00:00:00.015 Started by upstream project "autotest-per-patch" build number 127087
00:00:00.015 originally caused by:
00:00:00.015 Started by user sys_sgci
00:00:00.103 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.103 The recommended git tool is: git
00:00:00.103 using credential 00000000-0000-0000-0000-000000000002
00:00:00.105 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.150 Fetching changes from the remote Git repository
00:00:00.152 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.213 Using shallow fetch with depth 1
00:00:00.213 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.213 > git --version # timeout=10
00:00:00.247 > git --version # 'git version 2.39.2'
00:00:00.247 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.276 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.276 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.868 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.880 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.891 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD)
00:00:05.891 > git config core.sparsecheckout # timeout=10
00:00:05.902 > git read-tree -mu HEAD # timeout=10
00:00:05.917 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5
00:00:05.949 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters"
00:00:05.949 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10
00:00:06.044 [Pipeline] Start of Pipeline
00:00:06.061 [Pipeline] library
00:00:06.064 Loading library shm_lib@master
00:00:06.064 Library shm_lib@master is cached. Copying from home.
00:00:06.083 [Pipeline] node
00:00:06.098 Running on WFP22 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.099 [Pipeline] {
00:00:06.111 [Pipeline] catchError
00:00:06.113 [Pipeline] {
00:00:06.127 [Pipeline] wrap
00:00:06.138 [Pipeline] {
00:00:06.146 [Pipeline] stage
00:00:06.149 [Pipeline] { (Prologue)
00:00:06.325 [Pipeline] sh
00:00:06.606 + logger -p user.info -t JENKINS-CI
00:00:06.640 [Pipeline] echo
00:00:06.643 Node: WFP22
00:00:06.651 [Pipeline] sh
00:00:06.955 [Pipeline] setCustomBuildProperty
00:00:06.969 [Pipeline] echo
00:00:06.971 Cleanup processes
00:00:06.975 [Pipeline] sh
00:00:07.256 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.257 1221936 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.269 [Pipeline] sh
00:00:07.553 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.553 ++ grep -v 'sudo pgrep'
00:00:07.553 ++ awk '{print $1}'
00:00:07.553 + sudo kill -9
00:00:07.553 + true
00:00:07.566 [Pipeline] cleanWs
00:00:07.575 [WS-CLEANUP] Deleting project workspace...
00:00:07.575 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.581 [WS-CLEANUP] done
00:00:07.584 [Pipeline] setCustomBuildProperty
00:00:07.598 [Pipeline] sh
00:00:07.877 + sudo git config --global --replace-all safe.directory '*'
00:00:07.944 [Pipeline] httpRequest
00:00:07.965 [Pipeline] echo
00:00:07.966 Sorcerer 10.211.164.101 is alive
00:00:07.972 [Pipeline] httpRequest
00:00:07.975 HttpMethod: GET
00:00:07.976 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:07.976 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:07.991 Response Code: HTTP/1.1 200 OK
00:00:07.991 Success: Status code 200 is in the accepted range: 200,404
00:00:07.992 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:10.337 [Pipeline] sh
00:00:10.624 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:10.638 [Pipeline] httpRequest
00:00:10.651 [Pipeline] echo
00:00:10.653 Sorcerer 10.211.164.101 is alive
00:00:10.661 [Pipeline] httpRequest
00:00:10.666 HttpMethod: GET
00:00:10.666 URL: http://10.211.164.101/packages/spdk_dca21ec0f3ec663fb113b85210d70609a04f98a9.tar.gz
00:00:10.667 Sending request to url: http://10.211.164.101/packages/spdk_dca21ec0f3ec663fb113b85210d70609a04f98a9.tar.gz
00:00:10.669 Response Code: HTTP/1.1 200 OK
00:00:10.669 Success: Status code 200 is in the accepted range: 200,404
00:00:10.670 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_dca21ec0f3ec663fb113b85210d70609a04f98a9.tar.gz
00:00:22.561 [Pipeline] sh
00:00:22.847 + tar --no-same-owner -xf spdk_dca21ec0f3ec663fb113b85210d70609a04f98a9.tar.gz
00:00:25.390 [Pipeline] sh
00:00:25.714 + git -C spdk log --oneline -n5
00:00:25.714 dca21ec0f scripts/nvmf_perf: confirm set system settings
00:00:25.714 77f816207 scripts/nvmf_perf: modify set_pause_frames
00:00:25.714 81767f27c scripts/nvmf_perf: check all config file sections are present
00:00:25.714 166db62dc scripts/nvmf_perf: disable fio group reporting
00:00:25.714 dc3b3835d scripts/nvmf_perf: use dataclasses for collecting results data
00:00:25.724 [Pipeline] }
00:00:25.738 [Pipeline] // stage
00:00:25.744 [Pipeline] stage
00:00:25.746 [Pipeline] { (Prepare)
00:00:25.760 [Pipeline] writeFile
00:00:25.775 [Pipeline] sh
00:00:26.055 + logger -p user.info -t JENKINS-CI
00:00:26.065 [Pipeline] sh
00:00:26.346 + logger -p user.info -t JENKINS-CI
00:00:26.358 [Pipeline] sh
00:00:26.640 + cat autorun-spdk.conf
00:00:26.640 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:26.640 SPDK_TEST_NVMF=1
00:00:26.640 SPDK_TEST_NVME_CLI=1
00:00:26.640 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:26.640 SPDK_TEST_NVMF_NICS=e810
00:00:26.640 SPDK_TEST_VFIOUSER=1
00:00:26.640 SPDK_RUN_UBSAN=1
00:00:26.640 NET_TYPE=phy
00:00:26.646 RUN_NIGHTLY=0
00:00:26.650 [Pipeline] readFile
00:00:26.673 [Pipeline] withEnv
00:00:26.675 [Pipeline] {
00:00:26.686 [Pipeline] sh
00:00:26.967 + set -ex
00:00:26.967 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:26.967 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:26.967 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:26.967 ++ SPDK_TEST_NVMF=1
00:00:26.967 ++ SPDK_TEST_NVME_CLI=1
00:00:26.967 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:26.967 ++ SPDK_TEST_NVMF_NICS=e810
00:00:26.967 ++ SPDK_TEST_VFIOUSER=1
00:00:26.967 ++ SPDK_RUN_UBSAN=1
00:00:26.967 ++ NET_TYPE=phy
00:00:26.967 ++ RUN_NIGHTLY=0
00:00:26.967 + case $SPDK_TEST_NVMF_NICS in
00:00:26.967 + DRIVERS=ice
00:00:26.967 + [[ tcp == \r\d\m\a ]]
00:00:26.967 + [[ -n ice ]]
00:00:26.967 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:26.967 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:26.967 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:26.967 rmmod: ERROR: Module irdma is not currently loaded
00:00:26.967 rmmod: ERROR: Module i40iw is not currently loaded
00:00:26.967 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:26.967 + true
00:00:26.967 + for D in $DRIVERS
00:00:26.967 + sudo modprobe ice
00:00:26.967 + exit 0
00:00:26.976 [Pipeline] }
00:00:26.992 [Pipeline] // withEnv
00:00:26.996 [Pipeline] }
00:00:27.014 [Pipeline] // stage
00:00:27.023 [Pipeline] catchError
00:00:27.024 [Pipeline] {
00:00:27.038 [Pipeline] timeout
00:00:27.039 Timeout set to expire in 50 min
00:00:27.040 [Pipeline] {
00:00:27.054 [Pipeline] stage
00:00:27.056 [Pipeline] { (Tests)
00:00:27.069 [Pipeline] sh
00:00:27.347 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:27.347 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:27.347 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:27.347 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:27.347 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:27.347 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:27.347 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:27.347 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:27.347 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:27.347 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:27.347 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:00:27.347 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:27.347 + source /etc/os-release
00:00:27.347 ++ NAME='Fedora Linux'
00:00:27.347 ++ VERSION='38 (Cloud Edition)'
00:00:27.347 ++ ID=fedora
00:00:27.347 ++ VERSION_ID=38
00:00:27.347 ++ VERSION_CODENAME=
00:00:27.347 ++ PLATFORM_ID=platform:f38
00:00:27.347 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:27.347 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:27.347 ++ LOGO=fedora-logo-icon
00:00:27.347 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:27.347 ++ HOME_URL=https://fedoraproject.org/
00:00:27.347 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:27.347 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:27.347 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:27.347 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:27.347 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:27.347 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:27.347 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:27.347 ++ SUPPORT_END=2024-05-14
00:00:27.347 ++ VARIANT='Cloud Edition'
00:00:27.347 ++ VARIANT_ID=cloud
00:00:27.347 + uname -a
00:00:27.347 Linux spdk-wfp-22 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:27.347 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:29.891 Hugepages
00:00:29.891 node hugesize free / total
00:00:29.891 node0 1048576kB 0 / 0
00:00:29.891 node0 2048kB 0 / 0
00:00:29.891 node1 1048576kB 0 / 0
00:00:29.891 node1 2048kB 0 / 0
00:00:29.891
00:00:29.891 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:29.891 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:29.891 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:29.891 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:29.891 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:29.891 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:29.891 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:29.891 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:29.891 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:29.891 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:30.150 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:30.150 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:30.150 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:30.150 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:30.150 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:30.150 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:30.150 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:00:30.150 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:00:30.150 + rm -f /tmp/spdk-ld-path
00:00:30.150 + source autorun-spdk.conf
00:00:30.150 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:30.150 ++ SPDK_TEST_NVMF=1
00:00:30.150 ++ SPDK_TEST_NVME_CLI=1
00:00:30.150 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:30.150 ++ SPDK_TEST_NVMF_NICS=e810
00:00:30.150 ++ SPDK_TEST_VFIOUSER=1
00:00:30.150 ++ SPDK_RUN_UBSAN=1
00:00:30.150 ++ NET_TYPE=phy
00:00:30.150 ++ RUN_NIGHTLY=0
00:00:30.150 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:30.151 + [[ -n '' ]]
00:00:30.151 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:30.151 + for M in /var/spdk/build-*-manifest.txt
00:00:30.151 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:30.151 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:30.151 + for M in /var/spdk/build-*-manifest.txt
00:00:30.151 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:30.151 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:30.151 ++ uname
00:00:30.151 + [[ Linux == \L\i\n\u\x ]]
00:00:30.151 + sudo dmesg -T
00:00:30.151 + sudo dmesg --clear
00:00:30.151 + dmesg_pid=1222854
00:00:30.151 + [[ Fedora Linux == FreeBSD ]]
00:00:30.151 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:30.151 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:30.151 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:30.151 + [[ -x /usr/src/fio-static/fio ]]
00:00:30.151 + export FIO_BIN=/usr/src/fio-static/fio
00:00:30.151 + FIO_BIN=/usr/src/fio-static/fio
00:00:30.151 + sudo dmesg -Tw
00:00:30.151 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:30.151 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:30.151 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:30.151 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:30.151 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:30.151 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:30.151 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:30.151 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:30.151 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:30.151 Test configuration:
00:00:30.151 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:30.151 SPDK_TEST_NVMF=1
00:00:30.151 SPDK_TEST_NVME_CLI=1
00:00:30.151 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:30.151 SPDK_TEST_NVMF_NICS=e810
00:00:30.151 SPDK_TEST_VFIOUSER=1
00:00:30.151 SPDK_RUN_UBSAN=1
00:00:30.151 NET_TYPE=phy
00:00:30.410 RUN_NIGHTLY=0
00:00:30.410 19:01:16 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:00:30.410 19:01:16 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:30.410 19:01:16 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:30.410 19:01:16 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:30.410 19:01:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:30.410 19:01:16 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:30.410 19:01:16 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:30.410 19:01:16 -- paths/export.sh@5 -- $ export PATH
00:00:30.410 19:01:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:30.410 19:01:16 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:00:30.410 19:01:16 -- common/autobuild_common.sh@447 -- $ date +%s
00:00:30.410 19:01:16 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721840476.XXXXXX
00:00:30.410 19:01:16 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721840476.vJBabU
00:00:30.410 19:01:16 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:00:30.410 19:01:16 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:00:30.410 19:01:16 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:00:30.410 19:01:16 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:30.410 19:01:16 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:30.410 19:01:16 -- common/autobuild_common.sh@463 -- $ get_config_params
00:00:30.410 19:01:16 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:00:30.410 19:01:16 -- common/autotest_common.sh@10 -- $ set +x
00:00:30.410 19:01:16 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:00:30.410 19:01:16 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:00:30.410 19:01:16 -- pm/common@17 -- $ local monitor
00:00:30.410 19:01:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:30.410 19:01:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:30.410 19:01:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:30.410 19:01:16 -- pm/common@21 -- $ date +%s
00:00:30.410 19:01:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:30.410 19:01:16 -- pm/common@21 -- $ date +%s
00:00:30.410 19:01:16 -- pm/common@25 -- $ sleep 1
00:00:30.410 19:01:16 -- pm/common@21 -- $ date +%s
00:00:30.410 19:01:16 -- pm/common@21 -- $ date +%s
00:00:30.411 19:01:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721840476
00:00:30.411 19:01:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721840476
00:00:30.411 19:01:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721840476
00:00:30.411 19:01:16 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721840476
00:00:30.411 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721840476_collect-vmstat.pm.log
00:00:30.411 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721840476_collect-cpu-load.pm.log
00:00:30.411 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721840476_collect-cpu-temp.pm.log
00:00:30.411 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721840476_collect-bmc-pm.bmc.pm.log
00:00:31.347 19:01:17 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:00:31.347 19:01:17 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:31.347 19:01:17 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:31.347 19:01:17 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:31.347 19:01:17 -- spdk/autobuild.sh@16 -- $ date -u
00:00:31.347 Wed Jul 24 05:01:17 PM UTC 2024
00:00:31.347 19:01:17 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:31.347 v24.09-pre-323-gdca21ec0f
00:00:31.347 19:01:17 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:31.347 19:01:17 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:31.347 19:01:17 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:31.347 19:01:17 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:00:31.347 19:01:17 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:00:31.347 19:01:17 -- common/autotest_common.sh@10 -- $ set +x
00:00:31.347 ************************************
00:00:31.347 START TEST ubsan
00:00:31.347 ************************************
00:00:31.347 19:01:17 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:00:31.347 using ubsan
00:00:31.347
00:00:31.347 real 0m0.001s
00:00:31.347 user 0m0.001s
00:00:31.347 sys 0m0.000s
00:00:31.347 19:01:17 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:00:31.347 19:01:17 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:31.347 ************************************
00:00:31.347 END TEST ubsan
00:00:31.347 ************************************
00:00:31.606 19:01:17 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:31.606 19:01:17 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:31.606 19:01:17 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:31.606 19:01:17 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:31.606 19:01:17 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:31.606 19:01:17 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:31.606 19:01:17 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:31.606 19:01:17 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:31.606 19:01:17 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:00:31.606 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:00:31.606 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:00:32.173 Using 'verbs' RDMA provider
00:00:44.955 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:00:58.182 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:00:58.442 Creating mk/config.mk...done.
00:00:58.442 Creating mk/cc.flags.mk...done.
00:00:58.442 Type 'make' to build.
00:00:58.442 19:01:44 -- spdk/autobuild.sh@69 -- $ run_test make make -j112
00:00:58.442 19:01:44 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:00:58.442 19:01:44 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:00:58.442 19:01:44 -- common/autotest_common.sh@10 -- $ set +x
00:00:58.442 ************************************
00:00:58.442 START TEST make
00:00:58.442 ************************************
00:00:58.442 19:01:44 make -- common/autotest_common.sh@1125 -- $ make -j112
00:00:59.012 make[1]: Nothing to be done for 'all'.
00:01:00.399 The Meson build system
00:01:00.399 Version: 1.3.1
00:01:00.399 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:00.399 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:00.399 Build type: native build
00:01:00.399 Project name: libvfio-user
00:01:00.399 Project version: 0.0.1
00:01:00.399 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:00.399 C linker for the host machine: cc ld.bfd 2.39-16
00:01:00.399 Host machine cpu family: x86_64
00:01:00.399 Host machine cpu: x86_64
00:01:00.399 Run-time dependency threads found: YES
00:01:00.399 Library dl found: YES
00:01:00.399 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:00.399 Run-time dependency json-c found: YES 0.17
00:01:00.399 Run-time dependency cmocka found: YES 1.1.7
00:01:00.399 Program pytest-3 found: NO
00:01:00.399 Program flake8 found: NO
00:01:00.399 Program misspell-fixer found: NO
00:01:00.399 Program restructuredtext-lint found: NO
00:01:00.399 Program valgrind found: YES (/usr/bin/valgrind)
00:01:00.400 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:00.400 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:00.400 Compiler for C supports arguments -Wwrite-strings: YES
00:01:00.400 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:00.400 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:00.400 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:00.400 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:00.400 Build targets in project: 8
00:01:00.400 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:00.400 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:00.400
00:01:00.400 libvfio-user 0.0.1
00:01:00.400
00:01:00.400 User defined options
00:01:00.400 buildtype : debug
00:01:00.400 default_library: shared
00:01:00.400 libdir : /usr/local/lib
00:01:00.400
00:01:00.400 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:00.658 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:00.658 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:00.658 [2/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:00.658 [3/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:00.658 [4/37] Compiling C object samples/null.p/null.c.o
00:01:00.658 [5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:00.658 [6/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:00.658 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:00.658 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:00.658 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:00.658 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:00.658 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:00.658 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:00.658 [13/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:00.658 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:00.658 [15/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:00.916 [16/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:00.916 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:00.916 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:00.916 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:00.916 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:00.916 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:00.916 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:00.916 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:00.916 [24/37] Compiling C object samples/server.p/server.c.o
00:01:00.916 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:00.916 [26/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:00.916 [27/37] Compiling C object samples/client.p/client.c.o
00:01:00.916 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:00.916 [29/37] Linking target samples/client
00:01:00.916 [30/37] Linking target test/unit_tests
00:01:00.916 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:01:01.174 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:01.174 [33/37] Linking target samples/lspci
00:01:01.174 [34/37] Linking target samples/shadow_ioeventfd_server
00:01:01.174 [35/37] Linking target samples/server
00:01:01.174 [36/37] Linking target samples/null
00:01:01.174 [37/37] Linking target samples/gpio-pci-idio-16
00:01:01.174 INFO: autodetecting backend as ninja
00:01:01.174 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:01.174 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:01.433 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:01.433 ninja: no work to do.
00:01:06.707 The Meson build system
00:01:06.707 Version: 1.3.1
00:01:06.707 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:06.707 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:06.707 Build type: native build
00:01:06.707 Program cat found: YES (/usr/bin/cat)
00:01:06.707 Project name: DPDK
00:01:06.707 Project version: 24.03.0
00:01:06.707 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:06.707 C linker for the host machine: cc ld.bfd 2.39-16
00:01:06.707 Host machine cpu family: x86_64
00:01:06.707 Host machine cpu: x86_64
00:01:06.707 Message: ## Building in Developer Mode ##
00:01:06.707 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:06.707 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:06.707 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:06.707 Program python3 found: YES (/usr/bin/python3)
00:01:06.707 Program cat found: YES (/usr/bin/cat)
00:01:06.707 Compiler for C supports arguments -march=native: YES
00:01:06.707 Checking for size of "void *" : 8
00:01:06.707 Checking for size of "void *" : 8 (cached)
00:01:06.707 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:06.707 Library m found: YES
00:01:06.707 Library numa found: YES
00:01:06.707 Has header "numaif.h" : YES
00:01:06.707 Library fdt found: NO
00:01:06.707 Library execinfo found: NO
00:01:06.707 Has header "execinfo.h" : YES
00:01:06.707 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:06.707 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:06.707 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:06.707 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:06.707 Run-time dependency openssl found: YES 3.0.9
00:01:06.707 Run-time dependency libpcap found: YES 1.10.4
00:01:06.707 Has header "pcap.h" with dependency libpcap: YES
00:01:06.707 Compiler for C supports arguments -Wcast-qual: YES
00:01:06.707 Compiler for C supports arguments -Wdeprecated: YES
00:01:06.707 Compiler for C supports arguments -Wformat: YES
00:01:06.707 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:06.707 Compiler for C supports arguments -Wformat-security: NO
00:01:06.707 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:06.707 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:06.707 Compiler for C supports arguments -Wnested-externs: YES
00:01:06.707 Compiler for C supports arguments -Wold-style-definition: YES
00:01:06.707 Compiler for C supports arguments -Wpointer-arith: YES
00:01:06.707 Compiler for C supports arguments -Wsign-compare: YES
00:01:06.707 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:06.707 Compiler for C supports arguments -Wundef: YES
00:01:06.707 Compiler for C supports arguments -Wwrite-strings: YES
00:01:06.707 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:06.707 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:06.707 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:06.708 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:06.708 Program objdump found: YES (/usr/bin/objdump)
00:01:06.708 Compiler for C supports arguments -mavx512f: YES
00:01:06.708 Checking if "AVX512 checking" compiles: YES
00:01:06.708 Fetching value of define "__SSE4_2__" : 1
00:01:06.708 Fetching value of define "__AES__" : 1
00:01:06.708 Fetching value of define "__AVX__" : 1
00:01:06.708 Fetching value of define "__AVX2__" : 1
00:01:06.708 Fetching value of define "__AVX512BW__" : 1
00:01:06.708 Fetching value of define "__AVX512CD__" : 1
00:01:06.708 Fetching value of define "__AVX512DQ__" : 1
00:01:06.708 Fetching value of define "__AVX512F__" : 1
00:01:06.708 Fetching value of define "__AVX512VL__" : 1
00:01:06.708 Fetching value of define "__PCLMUL__" : 1
00:01:06.708 Fetching value of define "__RDRND__" : 1
00:01:06.708 Fetching value of define "__RDSEED__" : 1
00:01:06.708 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:06.708 Fetching value of define "__znver1__" : (undefined)
00:01:06.708 Fetching value of define "__znver2__" : (undefined)
00:01:06.708 Fetching value of define "__znver3__" : (undefined)
00:01:06.708 Fetching value of define "__znver4__" : (undefined)
00:01:06.708 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:06.708 Message: lib/log: Defining dependency "log"
00:01:06.708 Message: lib/kvargs: Defining dependency "kvargs"
00:01:06.708 Message: lib/telemetry: Defining dependency "telemetry"
00:01:06.708 Checking for function "getentropy" : NO
00:01:06.708 Message: lib/eal: Defining dependency "eal"
00:01:06.708 Message: lib/ring: Defining dependency "ring"
00:01:06.708 Message: lib/rcu: Defining dependency "rcu"
00:01:06.708 Message: lib/mempool: Defining dependency "mempool"
00:01:06.708 Message: lib/mbuf: Defining dependency "mbuf"
00:01:06.708 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:06.708 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:06.708 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:06.708 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:06.708 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:06.708 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:06.708 Compiler for C supports arguments -mpclmul: YES
00:01:06.708 Compiler for C supports arguments -maes: YES
00:01:06.708 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:06.708 Compiler for C supports arguments -mavx512bw: YES
00:01:06.708 Compiler for C supports arguments -mavx512dq: YES
00:01:06.708 Compiler for C supports arguments -mavx512vl: YES
00:01:06.708 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:06.708 Compiler for C supports arguments -mavx2: YES
00:01:06.708 Compiler for C supports arguments -mavx: YES
00:01:06.708 Message: lib/net: Defining dependency "net"
00:01:06.708 Message: lib/meter: Defining dependency "meter"
00:01:06.708 Message: lib/ethdev: Defining dependency "ethdev"
00:01:06.708 Message: lib/pci: Defining dependency "pci"
00:01:06.708 Message: lib/cmdline: Defining dependency "cmdline"
00:01:06.708 Message: lib/hash: Defining dependency "hash"
00:01:06.708 Message: lib/timer: Defining dependency "timer"
00:01:06.708 Message: lib/compressdev: Defining dependency "compressdev"
00:01:06.708 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:06.708 Message: lib/dmadev: Defining dependency "dmadev"
00:01:06.708 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:06.708 Message: lib/power: Defining dependency "power"
00:01:06.708 Message: lib/reorder: Defining dependency "reorder"
00:01:06.708 Message: lib/security: Defining dependency "security"
00:01:06.708 Has header "linux/userfaultfd.h" : YES
00:01:06.708 Has header "linux/vduse.h" : YES
00:01:06.708 Message: lib/vhost: Defining dependency "vhost"
00:01:06.708 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:06.708 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:06.708 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:06.708 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:06.708 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:06.708 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:06.708 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:06.708 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:06.708 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:06.708 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:06.708 Program doxygen found: YES (/usr/bin/doxygen)
00:01:06.708 Configuring doxy-api-html.conf using configuration
00:01:06.708 Configuring doxy-api-man.conf using configuration
00:01:06.708 Program mandb found: YES (/usr/bin/mandb)
00:01:06.708 Program sphinx-build found: NO
00:01:06.708 Configuring rte_build_config.h using configuration
00:01:06.708 Message:
00:01:06.708 =================
00:01:06.708 Applications Enabled
00:01:06.708 =================
00:01:06.708
00:01:06.708 apps:
00:01:06.708
00:01:06.708
00:01:06.708 Message:
00:01:06.708 =================
00:01:06.708 Libraries Enabled
00:01:06.708 =================
00:01:06.708
00:01:06.708 libs:
00:01:06.708 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:06.708 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:06.708 cryptodev, dmadev, power, reorder, security, vhost,
00:01:06.708
00:01:06.708 Message:
00:01:06.708 ===============
00:01:06.708 Drivers Enabled
00:01:06.708 ===============
00:01:06.708
00:01:06.708 common:
00:01:06.708
00:01:06.708 bus:
00:01:06.708 pci, vdev,
00:01:06.708 mempool:
00:01:06.708 ring,
00:01:06.708 dma:
00:01:06.708
00:01:06.708 net:
00:01:06.708
00:01:06.708 crypto:
00:01:06.708
00:01:06.708 compress:
00:01:06.708
00:01:06.708 vdpa:
00:01:06.708
00:01:06.708
00:01:06.708 Message:
00:01:06.708 =================
00:01:06.708 Content Skipped
00:01:06.708 =================
00:01:06.708
00:01:06.708 apps:
00:01:06.708 dumpcap: explicitly disabled via build config
00:01:06.708 graph: explicitly disabled via build config
00:01:06.708 pdump: explicitly disabled via build config
00:01:06.708 proc-info: explicitly disabled via build config
00:01:06.708 test-acl: explicitly disabled via build config
00:01:06.708 test-bbdev: explicitly disabled via build config
00:01:06.708 test-cmdline: explicitly disabled via build config
00:01:06.708 test-compress-perf: explicitly disabled via build config
00:01:06.708 test-crypto-perf: explicitly disabled via build config
00:01:06.708 test-dma-perf: explicitly disabled via build config
00:01:06.708 test-eventdev: explicitly disabled via build config
00:01:06.708 test-fib: explicitly disabled via build config
00:01:06.708 test-flow-perf: explicitly disabled via build config
00:01:06.708 test-gpudev: explicitly disabled via build config
00:01:06.708 test-mldev: explicitly disabled via build config
00:01:06.708 test-pipeline: explicitly disabled via build config
00:01:06.708 test-pmd: explicitly disabled via build config
00:01:06.708 test-regex: explicitly disabled via build config
00:01:06.708 test-sad: explicitly disabled via build config
00:01:06.708 test-security-perf: explicitly disabled via build config
00:01:06.708
00:01:06.708 libs:
00:01:06.708 argparse: explicitly disabled via build config
00:01:06.708 metrics: explicitly disabled via build config
00:01:06.708 acl: explicitly disabled via build config
00:01:06.708 bbdev: explicitly disabled via build config
00:01:06.708 bitratestats: explicitly disabled via build config
00:01:06.708 bpf: explicitly disabled via build config
00:01:06.708 cfgfile: explicitly disabled via build config
00:01:06.708 distributor: explicitly disabled via build config
00:01:06.708 efd: explicitly disabled via build config
00:01:06.708 eventdev: explicitly disabled via build config
00:01:06.708 dispatcher: explicitly disabled via build config
00:01:06.708 gpudev: explicitly disabled via build config
00:01:06.708 gro: explicitly disabled via build config
00:01:06.708 gso: explicitly disabled via build config
00:01:06.709 ip_frag: explicitly disabled via build config
00:01:06.709 jobstats: explicitly disabled via build config
00:01:06.709 latencystats: explicitly disabled via build config
00:01:06.709 lpm: explicitly disabled via build config
00:01:06.709 member: explicitly disabled via build config
00:01:06.709 pcapng: explicitly disabled via build config
00:01:06.709 rawdev: explicitly disabled via build config
00:01:06.709 regexdev: explicitly disabled via build config
00:01:06.709 mldev: explicitly disabled via build config
00:01:06.709 rib: explicitly disabled via build config
00:01:06.709 sched: explicitly disabled via build config
00:01:06.709 stack: explicitly disabled via build config
00:01:06.709 ipsec: explicitly disabled via build config
00:01:06.709 pdcp: explicitly disabled via build config
00:01:06.709 fib: explicitly disabled via build config
00:01:06.709 port: explicitly disabled via build config
00:01:06.709 pdump: explicitly disabled via build config
00:01:06.709 table: explicitly disabled via build config
00:01:06.709 pipeline: explicitly disabled via build config
00:01:06.709 graph: explicitly disabled via build config
00:01:06.709 node: explicitly disabled via build config
00:01:06.709
00:01:06.709 drivers:
00:01:06.709 common/cpt: not in enabled drivers build config
00:01:06.709 common/dpaax: not in enabled drivers build config
00:01:06.709 common/iavf: not in enabled drivers build config
00:01:06.709 common/idpf: not in enabled drivers build config
00:01:06.709 common/ionic: not in enabled drivers build config
00:01:06.709 common/mvep: not in enabled drivers build config
00:01:06.709 common/octeontx: not in enabled drivers build config
00:01:06.709 bus/auxiliary: not in enabled drivers build config
00:01:06.709 bus/cdx: not in enabled drivers build config
00:01:06.709 bus/dpaa: not in enabled drivers build config
00:01:06.709 bus/fslmc: not in enabled drivers build config
00:01:06.709 bus/ifpga: not in enabled drivers build config
00:01:06.709 bus/platform: not in enabled drivers build config
00:01:06.709 bus/uacce: not in enabled drivers build config
00:01:06.709 bus/vmbus: not in enabled drivers build config
00:01:06.709 common/cnxk: not in enabled drivers build config
00:01:06.709 common/mlx5: not in enabled drivers build config
00:01:06.709 common/nfp: not in enabled drivers build config
00:01:06.709 common/nitrox: not in enabled drivers build config
00:01:06.709 common/qat: not in enabled drivers build config
00:01:06.709 common/sfc_efx: not in enabled drivers build config
00:01:06.709 mempool/bucket: not in enabled drivers build config
00:01:06.709 mempool/cnxk: not in enabled drivers build config
00:01:06.709 mempool/dpaa: not in enabled drivers build config
00:01:06.709 mempool/dpaa2: not in enabled drivers build config
00:01:06.709 mempool/octeontx: not in enabled drivers build config
00:01:06.709 mempool/stack: not in enabled drivers build config
00:01:06.709 dma/cnxk: not in enabled drivers build config
00:01:06.709 dma/dpaa: not in enabled drivers build config
00:01:06.709 dma/dpaa2: not in enabled drivers build config
00:01:06.709 dma/hisilicon: not in enabled drivers build config
00:01:06.709 dma/idxd: not in enabled drivers build config
00:01:06.709 dma/ioat: not in enabled drivers build config
00:01:06.709 dma/skeleton: not in enabled drivers build config
00:01:06.709 net/af_packet: not in enabled drivers build config
00:01:06.709 net/af_xdp: not in enabled drivers build config
00:01:06.709 net/ark: not in enabled drivers build config
00:01:06.709 net/atlantic: not in enabled drivers build config
00:01:06.709 net/avp: not in enabled drivers build config
00:01:06.709 net/axgbe: not in enabled drivers build config
00:01:06.709 net/bnx2x: not in enabled drivers build config
00:01:06.709 net/bnxt: not in enabled drivers build config
00:01:06.709 net/bonding: not in enabled drivers build config
00:01:06.709 net/cnxk: not in enabled drivers build config
00:01:06.709 net/cpfl: not in enabled drivers build config
00:01:06.709 net/cxgbe: not in enabled drivers build config
00:01:06.709 net/dpaa: not in enabled drivers build config
00:01:06.709 net/dpaa2: not in enabled drivers build config
00:01:06.709 net/e1000: not in enabled drivers build config
00:01:06.709 net/ena: not in enabled drivers build config
00:01:06.709 net/enetc: not in enabled drivers build config
00:01:06.709 net/enetfec: not in enabled drivers build config
00:01:06.709 net/enic: not in enabled drivers build config
00:01:06.709 net/failsafe: not in enabled drivers build config
00:01:06.709 net/fm10k: not in enabled drivers build config
00:01:06.709 net/gve: not in enabled drivers build config
00:01:06.709 net/hinic: not in enabled drivers build config
00:01:06.709 net/hns3: not in enabled drivers build config
00:01:06.709 net/i40e: not in enabled drivers build config
00:01:06.709 net/iavf: not in enabled drivers build config
00:01:06.709 net/ice: not in enabled drivers build config
00:01:06.709 net/idpf: not in enabled drivers build config
00:01:06.709 net/igc: not in enabled drivers build config
00:01:06.709 net/ionic: not in enabled drivers build config
00:01:06.709 net/ipn3ke: not in enabled drivers build config
00:01:06.709 net/ixgbe: not in enabled drivers build config
00:01:06.709 net/mana: not in enabled drivers build config
00:01:06.709 net/memif: not in enabled drivers build config
00:01:06.709 net/mlx4: not in enabled drivers build config
00:01:06.709 net/mlx5: not in enabled drivers build config
00:01:06.709 net/mvneta: not in enabled drivers build config
00:01:06.709 net/mvpp2: not in enabled drivers build config
00:01:06.709 net/netvsc: not in enabled drivers build config
00:01:06.709 net/nfb: not in enabled drivers build config
00:01:06.709 net/nfp: not in enabled drivers build config
00:01:06.709 net/ngbe: not in enabled drivers build config
00:01:06.709 net/null: not in enabled drivers build config
00:01:06.709 net/octeontx: not in enabled drivers build config
00:01:06.709 net/octeon_ep: not in enabled drivers build config
00:01:06.709 net/pcap: not in enabled drivers build config
00:01:06.709 net/pfe: not in enabled drivers build config
00:01:06.709 net/qede: not in enabled drivers build config
00:01:06.709 net/ring: not in enabled drivers build config
00:01:06.709 net/sfc: not in enabled drivers build config
00:01:06.709 net/softnic: not in enabled drivers build config
00:01:06.709 net/tap: not in enabled drivers build config
00:01:06.709 net/thunderx: not in enabled drivers build config
00:01:06.709 net/txgbe: not in enabled drivers build config
00:01:06.709 net/vdev_netvsc: not in enabled drivers build config
00:01:06.709 net/vhost: not in enabled drivers build config
00:01:06.709 net/virtio: not in enabled drivers build config
00:01:06.709 net/vmxnet3: not in enabled drivers build config
00:01:06.709 raw/*: missing internal dependency, "rawdev"
00:01:06.709 crypto/armv8: not in enabled drivers build config
00:01:06.709 crypto/bcmfs: not in enabled drivers build config
00:01:06.709 crypto/caam_jr: not in enabled drivers build config
00:01:06.709 crypto/ccp: not in enabled drivers build config
00:01:06.709 crypto/cnxk: not in enabled drivers build config
00:01:06.709 crypto/dpaa_sec: not in enabled drivers build config
00:01:06.709 crypto/dpaa2_sec: not in enabled drivers build config
00:01:06.709 crypto/ipsec_mb: not in enabled drivers build config
00:01:06.709 crypto/mlx5: not in enabled drivers build config
00:01:06.709 crypto/mvsam: not in enabled drivers build config
00:01:06.709 crypto/nitrox: not in enabled drivers build config
00:01:06.709 crypto/null: not in enabled drivers build config
00:01:06.709 crypto/octeontx: not in enabled drivers build config
00:01:06.709 crypto/openssl: not in enabled drivers build config
00:01:06.709 crypto/scheduler: not in enabled drivers build config
00:01:06.709 crypto/uadk: not in enabled drivers build config
00:01:06.709 crypto/virtio: not in enabled drivers build config
00:01:06.709 compress/isal: not in enabled drivers build config
00:01:06.709 compress/mlx5: not in enabled drivers build config
00:01:06.709 compress/nitrox: not in enabled drivers build config
00:01:06.709 compress/octeontx: not in enabled drivers build config
00:01:06.709 compress/zlib: not in enabled drivers build config
00:01:06.709 regex/*: missing internal dependency, "regexdev"
00:01:06.709 ml/*: missing internal dependency, "mldev"
00:01:06.709 vdpa/ifc: not in enabled drivers build config
00:01:06.709 vdpa/mlx5: not in enabled drivers build config
00:01:06.710 vdpa/nfp: not in enabled drivers build config
00:01:06.710 vdpa/sfc: not in enabled drivers build config
00:01:06.710 event/*: missing internal dependency, "eventdev"
00:01:06.710 baseband/*: missing internal dependency, "bbdev"
00:01:06.710 gpu/*: missing internal dependency, "gpudev"
00:01:06.710
00:01:06.710
00:01:06.969 Build targets in project: 85
00:01:06.969
00:01:06.969 DPDK 24.03.0
00:01:06.969
00:01:06.969 User defined options
00:01:06.969 buildtype : debug
00:01:06.969 default_library : shared
00:01:06.969 libdir : lib
00:01:06.969 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:06.969 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:06.969 c_link_args :
00:01:06.969 cpu_instruction_set: native
00:01:06.969 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:01:06.969 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:01:06.969 enable_docs : false
00:01:06.969 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:06.969 enable_kmods : false
00:01:06.969 max_lcores : 128
00:01:06.969 tests : false
00:01:06.969
00:01:06.969 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:07.228 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:07.496 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:07.496 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:07.496 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:07.496 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:07.496 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:07.496 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:07.496 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:07.496 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:07.496 [9/268] Linking static target lib/librte_kvargs.a
00:01:07.496 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:07.496 [11/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:07.496 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:07.496 [13/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:07.496 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:07.496 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:07.496 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:07.496 [17/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:07.496 [18/268] Linking static target lib/librte_log.a
00:01:07.760 [19/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:07.760 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:07.760 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:07.760 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:07.760 [23/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:07.760 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:07.760 [25/268] Linking static target lib/librte_pci.a
00:01:07.760 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:07.760 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:07.760 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:07.760 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:07.760 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:07.760 [31/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:07.760 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:07.760 [33/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:08.021 [34/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:08.021 [35/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:08.021 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:08.021 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:08.021 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:08.021 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:08.021 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:08.021 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:08.021 [42/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:08.021 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:08.021 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:08.021 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:08.021 [46/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:08.021 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:08.021 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:08.021 [49/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:08.021 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:08.021 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:08.021 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:08.021 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:08.021 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:08.021 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:08.021 [56/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:08.021 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:08.021 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:08.021 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:08.021 [60/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:08.021 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:08.021 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:08.021 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:08.021 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:08.021 [65/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:08.021 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:08.021 [67/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:08.021 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:08.021 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:08.021 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:08.022 [71/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:08.022 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:08.022 [73/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:08.022 [74/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:08.022 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:08.022 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:08.022 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:08.022 [78/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:08.022 [79/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:08.022 [80/268] Linking static target lib/librte_meter.a
00:01:08.022 [81/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:08.022 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:08.022 [83/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:08.022 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:08.022 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:08.022 [86/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:08.022 [87/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:08.022 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:08.022 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:08.022 [90/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:08.022 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:08.022 [92/268] Linking static target lib/librte_telemetry.a
00:01:08.280 [93/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:01:08.280 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:08.280 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:08.280 [96/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:08.280 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:08.280 [98/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:08.280 [99/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:08.280 [100/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:08.280 [101/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:08.280 [102/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:08.281 [103/268] Linking static target lib/librte_ring.a
00:01:08.281 [104/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:08.281 [105/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:08.281 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:08.281 [107/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:08.281 [108/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:08.281 [109/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:08.281 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:08.281 [111/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:08.281 [112/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:08.281 [113/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:08.281 [114/268] Linking static target lib/librte_rcu.a
00:01:08.281 [115/268] Linking static target lib/librte_cmdline.a
00:01:08.281 [116/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:08.281 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:08.281 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:01:08.281 [119/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:08.281 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:08.281 [121/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:08.281 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:08.281 [123/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:08.281 [124/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:08.281 [125/268] Linking static target lib/librte_timer.a
00:01:08.281 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:08.281 [127/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:08.281 [128/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:08.281 [129/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:08.281 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:08.281 [131/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:08.281 [132/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:08.281 [133/268] Linking static target lib/librte_mempool.a
00:01:08.281 [134/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:08.281 [135/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:08.281 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:08.281 [137/268] Linking static target lib/librte_net.a
00:01:08.281 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:08.281 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:08.281 [140/268] Linking static target lib/librte_eal.a
00:01:08.281 [141/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:08.281 [142/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:01:08.281 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:08.281 [144/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:08.281 [145/268] Linking static target lib/librte_dmadev.a
00:01:08.281 [146/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:08.281 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:08.281 [148/268] Linking static target lib/librte_compressdev.a
00:01:08.281 [149/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:08.281 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:08.281 [151/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:08.281 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:08.281 [153/268] Generating
lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.281 [154/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:08.281 [155/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.281 [156/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:08.540 [157/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:08.540 [158/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:08.540 [159/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:08.540 [160/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:08.540 [161/268] Linking target lib/librte_log.so.24.1 00:01:08.540 [162/268] Linking static target lib/librte_mbuf.a 00:01:08.540 [163/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:08.540 [164/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:08.540 [165/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:08.540 [166/268] Linking static target lib/librte_power.a 00:01:08.540 [167/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.540 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:08.540 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:08.540 [170/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:08.540 [171/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:08.540 [172/268] Linking static target lib/librte_reorder.a 00:01:08.540 [173/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:08.540 [174/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:08.540 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:08.540 [176/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:08.540 [177/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.540 [178/268] Linking static target lib/librte_security.a 00:01:08.540 [179/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:08.540 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:08.540 [181/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.540 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:08.540 [183/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:08.540 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:08.540 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:08.540 [186/268] Linking target lib/librte_kvargs.so.24.1 00:01:08.540 [187/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:08.540 [188/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:08.540 [189/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:08.540 [190/268] Linking static target lib/librte_hash.a 00:01:08.540 [191/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:08.799 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:08.799 [193/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 
00:01:08.799 [194/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.799 [195/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.799 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:08.799 [197/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:08.799 [198/268] Linking target lib/librte_telemetry.so.24.1 00:01:08.799 [199/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:08.799 [200/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:08.799 [201/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:08.799 [202/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:08.799 [203/268] Linking static target drivers/librte_bus_vdev.a 00:01:08.799 [204/268] Linking static target lib/librte_cryptodev.a 00:01:08.799 [205/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:08.799 [206/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:08.799 [207/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:08.799 [208/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:08.799 [209/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:08.799 [210/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:08.799 [211/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:08.799 [212/268] Linking static target drivers/librte_mempool_ring.a 00:01:08.799 [213/268] Linking static target drivers/librte_bus_pci.a 00:01:09.057 [214/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.057 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.057 [216/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:09.057 [217/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.057 [218/268] Linking static target lib/librte_ethdev.a 00:01:09.057 [219/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.316 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.316 [221/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.316 [222/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.316 [223/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:09.575 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.575 [225/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.833 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.833 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.093 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:10.093 [229/268] Linking static target lib/librte_vhost.a 
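The [N/268] Compiling/Linking entries above come from SPDK's bundled DPDK submodule being built through meson's ninja backend; the autodetected ninja command for the same build-tmp directory is printed a few entries further down. As a minimal sketch, assuming an SPDK tree with the dpdk submodule checked out, the sub-build can be reproduced by hand roughly like this (in the autotest job it is driven by SPDK's own configure/make rather than typed manually, and the -j width of 112 seen below is this machine's core count, not a requirement):

    # Illustrative only: rebuild SPDK's bundled DPDK the way this job does.
    # build-tmp matches the directory named in the ninja command in the log.
    cd spdk/dpdk
    meson setup build-tmp               # configures the ninja backend
    ninja -C build-tmp -j "$(nproc)"    # emits the [N/268] Compiling/Linking lines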
00:01:11.115 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.495 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.062 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.970 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.229 [234/268] Linking target lib/librte_eal.so.24.1 00:01:21.229 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:21.229 [236/268] Linking target lib/librte_ring.so.24.1 00:01:21.229 [237/268] Linking target lib/librte_meter.so.24.1 00:01:21.229 [238/268] Linking target lib/librte_timer.so.24.1 00:01:21.229 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:21.229 [240/268] Linking target lib/librte_pci.so.24.1 00:01:21.229 [241/268] Linking target lib/librte_dmadev.so.24.1 00:01:21.488 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:21.488 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:21.488 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:21.488 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:21.488 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:21.488 [247/268] Linking target lib/librte_rcu.so.24.1 00:01:21.488 [248/268] Linking target lib/librte_mempool.so.24.1 00:01:21.488 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:21.488 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:21.747 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:21.747 [252/268] Linking target lib/librte_mbuf.so.24.1 00:01:21.747 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:21.747 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:21.747 [255/268] Linking target lib/librte_net.so.24.1 00:01:21.747 [256/268] Linking target lib/librte_compressdev.so.24.1 00:01:21.747 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:01:21.747 [258/268] Linking target lib/librte_reorder.so.24.1 00:01:22.006 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:22.006 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:22.006 [261/268] Linking target lib/librte_cmdline.so.24.1 00:01:22.006 [262/268] Linking target lib/librte_security.so.24.1 00:01:22.006 [263/268] Linking target lib/librte_hash.so.24.1 00:01:22.006 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:22.265 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:22.265 [266/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:22.265 [267/268] Linking target lib/librte_power.so.24.1 00:01:22.265 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:22.265 INFO: autodetecting backend as ninja 00:01:22.265 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 112 00:01:23.204 CC lib/ut/ut.o 00:01:23.204 CC lib/ut_mock/mock.o 00:01:23.464 CC lib/log/log.o 00:01:23.464 CC lib/log/log_flags.o 00:01:23.464 CC 
lib/log/log_deprecated.o 00:01:23.464 LIB libspdk_ut.a 00:01:23.464 SO libspdk_ut.so.2.0 00:01:23.464 LIB libspdk_log.a 00:01:23.464 LIB libspdk_ut_mock.a 00:01:23.464 SYMLINK libspdk_ut.so 00:01:23.464 SO libspdk_ut_mock.so.6.0 00:01:23.464 SO libspdk_log.so.7.0 00:01:23.723 SYMLINK libspdk_ut_mock.so 00:01:23.723 SYMLINK libspdk_log.so 00:01:23.982 CXX lib/trace_parser/trace.o 00:01:23.982 CC lib/dma/dma.o 00:01:23.982 CC lib/util/base64.o 00:01:23.982 CC lib/util/bit_array.o 00:01:23.982 CC lib/ioat/ioat.o 00:01:23.982 CC lib/util/cpuset.o 00:01:23.982 CC lib/util/crc32c.o 00:01:23.982 CC lib/util/crc16.o 00:01:23.982 CC lib/util/crc32_ieee.o 00:01:23.982 CC lib/util/crc32.o 00:01:23.982 CC lib/util/crc64.o 00:01:23.982 CC lib/util/dif.o 00:01:23.982 CC lib/util/fd.o 00:01:23.982 CC lib/util/fd_group.o 00:01:23.982 CC lib/util/file.o 00:01:23.982 CC lib/util/hexlify.o 00:01:23.982 CC lib/util/iov.o 00:01:23.982 CC lib/util/math.o 00:01:23.982 CC lib/util/net.o 00:01:23.982 CC lib/util/pipe.o 00:01:23.982 CC lib/util/strerror_tls.o 00:01:23.982 CC lib/util/string.o 00:01:23.982 CC lib/util/zipf.o 00:01:23.982 CC lib/util/uuid.o 00:01:23.982 CC lib/util/xor.o 00:01:24.241 CC lib/vfio_user/host/vfio_user_pci.o 00:01:24.241 CC lib/vfio_user/host/vfio_user.o 00:01:24.241 LIB libspdk_dma.a 00:01:24.241 SO libspdk_dma.so.4.0 00:01:24.241 LIB libspdk_ioat.a 00:01:24.241 SYMLINK libspdk_dma.so 00:01:24.241 SO libspdk_ioat.so.7.0 00:01:24.241 LIB libspdk_vfio_user.a 00:01:24.241 SYMLINK libspdk_ioat.so 00:01:24.500 SO libspdk_vfio_user.so.5.0 00:01:24.500 LIB libspdk_util.a 00:01:24.500 SYMLINK libspdk_vfio_user.so 00:01:24.500 SO libspdk_util.so.10.0 00:01:24.500 SYMLINK libspdk_util.so 00:01:24.500 LIB libspdk_trace_parser.a 00:01:24.760 SO libspdk_trace_parser.so.5.0 00:01:24.760 SYMLINK libspdk_trace_parser.so 00:01:25.018 CC lib/rdma_utils/rdma_utils.o 00:01:25.018 CC lib/env_dpdk/env.o 00:01:25.018 CC lib/env_dpdk/memory.o 00:01:25.018 CC lib/rdma_provider/common.o 00:01:25.018 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:25.018 CC lib/env_dpdk/pci.o 00:01:25.018 CC lib/json/json_parse.o 00:01:25.018 CC lib/env_dpdk/init.o 00:01:25.018 CC lib/env_dpdk/threads.o 00:01:25.018 CC lib/json/json_util.o 00:01:25.018 CC lib/json/json_write.o 00:01:25.018 CC lib/env_dpdk/pci_ioat.o 00:01:25.018 CC lib/conf/conf.o 00:01:25.018 CC lib/env_dpdk/pci_virtio.o 00:01:25.018 CC lib/env_dpdk/pci_vmd.o 00:01:25.018 CC lib/env_dpdk/pci_idxd.o 00:01:25.018 CC lib/env_dpdk/pci_event.o 00:01:25.018 CC lib/env_dpdk/sigbus_handler.o 00:01:25.018 CC lib/env_dpdk/pci_dpdk.o 00:01:25.018 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:25.018 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:25.018 CC lib/idxd/idxd.o 00:01:25.018 CC lib/idxd/idxd_user.o 00:01:25.018 CC lib/idxd/idxd_kernel.o 00:01:25.018 CC lib/vmd/vmd.o 00:01:25.018 CC lib/vmd/led.o 00:01:25.277 LIB libspdk_rdma_provider.a 00:01:25.277 LIB libspdk_conf.a 00:01:25.277 LIB libspdk_rdma_utils.a 00:01:25.277 SO libspdk_rdma_provider.so.6.0 00:01:25.277 SO libspdk_conf.so.6.0 00:01:25.277 LIB libspdk_json.a 00:01:25.277 SO libspdk_rdma_utils.so.1.0 00:01:25.277 SYMLINK libspdk_rdma_provider.so 00:01:25.277 SYMLINK libspdk_conf.so 00:01:25.277 SO libspdk_json.so.6.0 00:01:25.277 SYMLINK libspdk_rdma_utils.so 00:01:25.277 SYMLINK libspdk_json.so 00:01:25.536 LIB libspdk_idxd.a 00:01:25.536 SO libspdk_idxd.so.12.0 00:01:25.536 LIB libspdk_vmd.a 00:01:25.536 SYMLINK libspdk_idxd.so 00:01:25.536 SO libspdk_vmd.so.6.0 00:01:25.536 SYMLINK libspdk_vmd.so 00:01:25.795 CC 
lib/jsonrpc/jsonrpc_server.o 00:01:25.795 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:25.795 CC lib/jsonrpc/jsonrpc_client.o 00:01:25.795 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:25.795 LIB libspdk_jsonrpc.a 00:01:26.054 SO libspdk_jsonrpc.so.6.0 00:01:26.054 LIB libspdk_env_dpdk.a 00:01:26.054 SYMLINK libspdk_jsonrpc.so 00:01:26.054 SO libspdk_env_dpdk.so.15.0 00:01:26.313 SYMLINK libspdk_env_dpdk.so 00:01:26.313 CC lib/rpc/rpc.o 00:01:26.572 LIB libspdk_rpc.a 00:01:26.572 SO libspdk_rpc.so.6.0 00:01:26.572 SYMLINK libspdk_rpc.so 00:01:27.140 CC lib/keyring/keyring.o 00:01:27.140 CC lib/keyring/keyring_rpc.o 00:01:27.140 CC lib/notify/notify.o 00:01:27.140 CC lib/notify/notify_rpc.o 00:01:27.140 CC lib/trace/trace.o 00:01:27.140 CC lib/trace/trace_rpc.o 00:01:27.140 CC lib/trace/trace_flags.o 00:01:27.140 LIB libspdk_notify.a 00:01:27.140 LIB libspdk_keyring.a 00:01:27.140 SO libspdk_notify.so.6.0 00:01:27.140 SO libspdk_keyring.so.1.0 00:01:27.399 LIB libspdk_trace.a 00:01:27.399 SYMLINK libspdk_notify.so 00:01:27.399 SO libspdk_trace.so.10.0 00:01:27.399 SYMLINK libspdk_keyring.so 00:01:27.399 SYMLINK libspdk_trace.so 00:01:27.657 CC lib/thread/thread.o 00:01:27.657 CC lib/thread/iobuf.o 00:01:27.657 CC lib/sock/sock.o 00:01:27.657 CC lib/sock/sock_rpc.o 00:01:27.946 LIB libspdk_sock.a 00:01:28.206 SO libspdk_sock.so.10.0 00:01:28.206 SYMLINK libspdk_sock.so 00:01:28.465 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:28.465 CC lib/nvme/nvme_ctrlr.o 00:01:28.465 CC lib/nvme/nvme_fabric.o 00:01:28.465 CC lib/nvme/nvme_ns_cmd.o 00:01:28.465 CC lib/nvme/nvme_ns.o 00:01:28.465 CC lib/nvme/nvme_pcie_common.o 00:01:28.465 CC lib/nvme/nvme_pcie.o 00:01:28.465 CC lib/nvme/nvme_qpair.o 00:01:28.465 CC lib/nvme/nvme.o 00:01:28.465 CC lib/nvme/nvme_quirks.o 00:01:28.465 CC lib/nvme/nvme_transport.o 00:01:28.465 CC lib/nvme/nvme_discovery.o 00:01:28.465 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:28.465 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:28.465 CC lib/nvme/nvme_tcp.o 00:01:28.465 CC lib/nvme/nvme_opal.o 00:01:28.465 CC lib/nvme/nvme_io_msg.o 00:01:28.465 CC lib/nvme/nvme_poll_group.o 00:01:28.465 CC lib/nvme/nvme_zns.o 00:01:28.465 CC lib/nvme/nvme_auth.o 00:01:28.465 CC lib/nvme/nvme_stubs.o 00:01:28.465 CC lib/nvme/nvme_cuse.o 00:01:28.465 CC lib/nvme/nvme_vfio_user.o 00:01:28.465 CC lib/nvme/nvme_rdma.o 00:01:28.723 LIB libspdk_thread.a 00:01:28.723 SO libspdk_thread.so.10.1 00:01:28.982 SYMLINK libspdk_thread.so 00:01:29.240 CC lib/init/json_config.o 00:01:29.240 CC lib/init/subsystem.o 00:01:29.240 CC lib/init/subsystem_rpc.o 00:01:29.240 CC lib/init/rpc.o 00:01:29.240 CC lib/accel/accel.o 00:01:29.240 CC lib/accel/accel_sw.o 00:01:29.240 CC lib/blob/blobstore.o 00:01:29.240 CC lib/accel/accel_rpc.o 00:01:29.240 CC lib/blob/request.o 00:01:29.240 CC lib/blob/zeroes.o 00:01:29.240 CC lib/vfu_tgt/tgt_endpoint.o 00:01:29.240 CC lib/blob/blob_bs_dev.o 00:01:29.240 CC lib/vfu_tgt/tgt_rpc.o 00:01:29.240 CC lib/virtio/virtio_vfio_user.o 00:01:29.240 CC lib/virtio/virtio.o 00:01:29.240 CC lib/virtio/virtio_vhost_user.o 00:01:29.240 CC lib/virtio/virtio_pci.o 00:01:29.498 LIB libspdk_init.a 00:01:29.498 LIB libspdk_vfu_tgt.a 00:01:29.498 SO libspdk_init.so.5.0 00:01:29.498 LIB libspdk_virtio.a 00:01:29.498 SO libspdk_vfu_tgt.so.3.0 00:01:29.498 SO libspdk_virtio.so.7.0 00:01:29.498 SYMLINK libspdk_init.so 00:01:29.498 SYMLINK libspdk_vfu_tgt.so 00:01:29.498 SYMLINK libspdk_virtio.so 00:01:29.757 CC lib/event/app.o 00:01:29.757 CC lib/event/reactor.o 00:01:29.757 CC lib/event/log_rpc.o 00:01:29.757 CC 
lib/event/app_rpc.o 00:01:29.757 CC lib/event/scheduler_static.o 00:01:30.016 LIB libspdk_accel.a 00:01:30.016 SO libspdk_accel.so.16.0 00:01:30.016 SYMLINK libspdk_accel.so 00:01:30.016 LIB libspdk_nvme.a 00:01:30.275 LIB libspdk_event.a 00:01:30.275 SO libspdk_event.so.14.0 00:01:30.275 SO libspdk_nvme.so.13.1 00:01:30.275 SYMLINK libspdk_event.so 00:01:30.534 CC lib/bdev/bdev.o 00:01:30.534 CC lib/bdev/bdev_zone.o 00:01:30.534 CC lib/bdev/part.o 00:01:30.534 CC lib/bdev/bdev_rpc.o 00:01:30.534 CC lib/bdev/scsi_nvme.o 00:01:30.534 SYMLINK libspdk_nvme.so 00:01:31.471 LIB libspdk_blob.a 00:01:31.471 SO libspdk_blob.so.11.0 00:01:31.472 SYMLINK libspdk_blob.so 00:01:31.738 CC lib/lvol/lvol.o 00:01:31.738 CC lib/blobfs/blobfs.o 00:01:31.738 CC lib/blobfs/tree.o 00:01:32.308 LIB libspdk_bdev.a 00:01:32.308 SO libspdk_bdev.so.16.0 00:01:32.308 SYMLINK libspdk_bdev.so 00:01:32.308 LIB libspdk_blobfs.a 00:01:32.308 LIB libspdk_lvol.a 00:01:32.308 SO libspdk_blobfs.so.10.0 00:01:32.308 SO libspdk_lvol.so.10.0 00:01:32.567 SYMLINK libspdk_blobfs.so 00:01:32.567 SYMLINK libspdk_lvol.so 00:01:32.567 CC lib/ublk/ublk.o 00:01:32.567 CC lib/scsi/dev.o 00:01:32.567 CC lib/ublk/ublk_rpc.o 00:01:32.567 CC lib/scsi/lun.o 00:01:32.567 CC lib/scsi/port.o 00:01:32.567 CC lib/scsi/scsi.o 00:01:32.567 CC lib/scsi/scsi_bdev.o 00:01:32.567 CC lib/nvmf/ctrlr.o 00:01:32.567 CC lib/scsi/scsi_pr.o 00:01:32.567 CC lib/nvmf/ctrlr_bdev.o 00:01:32.567 CC lib/scsi/scsi_rpc.o 00:01:32.567 CC lib/scsi/task.o 00:01:32.567 CC lib/nvmf/ctrlr_discovery.o 00:01:32.567 CC lib/nvmf/subsystem.o 00:01:32.567 CC lib/nvmf/nvmf_rpc.o 00:01:32.567 CC lib/nvmf/nvmf.o 00:01:32.567 CC lib/nbd/nbd.o 00:01:32.567 CC lib/nvmf/transport.o 00:01:32.567 CC lib/nbd/nbd_rpc.o 00:01:32.567 CC lib/nvmf/tcp.o 00:01:32.567 CC lib/nvmf/stubs.o 00:01:32.567 CC lib/nvmf/mdns_server.o 00:01:32.567 CC lib/nvmf/vfio_user.o 00:01:32.567 CC lib/nvmf/rdma.o 00:01:32.567 CC lib/nvmf/auth.o 00:01:32.567 CC lib/ftl/ftl_core.o 00:01:32.567 CC lib/ftl/ftl_init.o 00:01:32.567 CC lib/ftl/ftl_layout.o 00:01:32.567 CC lib/ftl/ftl_debug.o 00:01:32.825 CC lib/ftl/ftl_io.o 00:01:32.825 CC lib/ftl/ftl_sb.o 00:01:32.825 CC lib/ftl/ftl_l2p.o 00:01:32.825 CC lib/ftl/ftl_l2p_flat.o 00:01:32.825 CC lib/ftl/ftl_nv_cache.o 00:01:32.825 CC lib/ftl/ftl_band.o 00:01:32.825 CC lib/ftl/ftl_band_ops.o 00:01:32.825 CC lib/ftl/ftl_writer.o 00:01:32.825 CC lib/ftl/ftl_rq.o 00:01:32.825 CC lib/ftl/ftl_reloc.o 00:01:32.825 CC lib/ftl/ftl_l2p_cache.o 00:01:32.825 CC lib/ftl/ftl_p2l.o 00:01:32.825 CC lib/ftl/mngt/ftl_mngt.o 00:01:32.825 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:32.825 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:32.825 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:32.825 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:32.825 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:32.825 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:32.825 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:32.825 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:32.825 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:32.825 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:32.825 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:32.825 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:32.825 CC lib/ftl/utils/ftl_conf.o 00:01:32.825 CC lib/ftl/utils/ftl_md.o 00:01:32.825 CC lib/ftl/utils/ftl_mempool.o 00:01:32.825 CC lib/ftl/utils/ftl_bitmap.o 00:01:32.825 CC lib/ftl/utils/ftl_property.o 00:01:32.825 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:32.825 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:32.825 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:32.825 CC lib/ftl/upgrade/ftl_band_upgrade.o 
00:01:32.825 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:32.825 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:32.825 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:32.825 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:32.825 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:32.825 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:32.825 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:32.825 CC lib/ftl/base/ftl_base_dev.o 00:01:32.825 CC lib/ftl/ftl_trace.o 00:01:32.825 CC lib/ftl/base/ftl_base_bdev.o 00:01:33.083 LIB libspdk_nbd.a 00:01:33.342 SO libspdk_nbd.so.7.0 00:01:33.342 LIB libspdk_scsi.a 00:01:33.342 SYMLINK libspdk_nbd.so 00:01:33.342 SO libspdk_scsi.so.9.0 00:01:33.342 LIB libspdk_ublk.a 00:01:33.342 SYMLINK libspdk_scsi.so 00:01:33.342 SO libspdk_ublk.so.3.0 00:01:33.600 SYMLINK libspdk_ublk.so 00:01:33.600 CC lib/vhost/vhost.o 00:01:33.600 CC lib/vhost/vhost_rpc.o 00:01:33.600 CC lib/vhost/vhost_scsi.o 00:01:33.600 CC lib/vhost/rte_vhost_user.o 00:01:33.600 CC lib/vhost/vhost_blk.o 00:01:33.600 LIB libspdk_ftl.a 00:01:33.858 CC lib/iscsi/conn.o 00:01:33.858 CC lib/iscsi/init_grp.o 00:01:33.858 CC lib/iscsi/iscsi.o 00:01:33.858 CC lib/iscsi/md5.o 00:01:33.858 CC lib/iscsi/param.o 00:01:33.858 CC lib/iscsi/portal_grp.o 00:01:33.858 CC lib/iscsi/tgt_node.o 00:01:33.858 CC lib/iscsi/iscsi_subsystem.o 00:01:33.858 CC lib/iscsi/iscsi_rpc.o 00:01:33.858 CC lib/iscsi/task.o 00:01:33.858 SO libspdk_ftl.so.9.0 00:01:34.116 SYMLINK libspdk_ftl.so 00:01:34.375 LIB libspdk_nvmf.a 00:01:34.375 SO libspdk_nvmf.so.19.0 00:01:34.634 LIB libspdk_vhost.a 00:01:34.634 SYMLINK libspdk_nvmf.so 00:01:34.634 SO libspdk_vhost.so.8.0 00:01:34.634 SYMLINK libspdk_vhost.so 00:01:34.634 LIB libspdk_iscsi.a 00:01:34.892 SO libspdk_iscsi.so.8.0 00:01:34.892 SYMLINK libspdk_iscsi.so 00:01:35.459 CC module/env_dpdk/env_dpdk_rpc.o 00:01:35.459 CC module/vfu_device/vfu_virtio.o 00:01:35.459 CC module/vfu_device/vfu_virtio_scsi.o 00:01:35.459 CC module/vfu_device/vfu_virtio_rpc.o 00:01:35.459 CC module/vfu_device/vfu_virtio_blk.o 00:01:35.718 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:35.718 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:35.718 CC module/scheduler/gscheduler/gscheduler.o 00:01:35.718 CC module/keyring/linux/keyring.o 00:01:35.718 LIB libspdk_env_dpdk_rpc.a 00:01:35.718 CC module/keyring/linux/keyring_rpc.o 00:01:35.718 CC module/accel/iaa/accel_iaa.o 00:01:35.718 CC module/accel/iaa/accel_iaa_rpc.o 00:01:35.718 CC module/accel/dsa/accel_dsa.o 00:01:35.718 CC module/accel/dsa/accel_dsa_rpc.o 00:01:35.718 CC module/keyring/file/keyring.o 00:01:35.718 CC module/keyring/file/keyring_rpc.o 00:01:35.718 CC module/blob/bdev/blob_bdev.o 00:01:35.718 CC module/accel/error/accel_error.o 00:01:35.718 CC module/accel/error/accel_error_rpc.o 00:01:35.718 CC module/accel/ioat/accel_ioat.o 00:01:35.718 CC module/sock/posix/posix.o 00:01:35.718 CC module/accel/ioat/accel_ioat_rpc.o 00:01:35.718 SO libspdk_env_dpdk_rpc.so.6.0 00:01:35.718 SYMLINK libspdk_env_dpdk_rpc.so 00:01:35.718 LIB libspdk_keyring_linux.a 00:01:35.718 LIB libspdk_scheduler_dpdk_governor.a 00:01:35.718 LIB libspdk_keyring_file.a 00:01:35.718 LIB libspdk_scheduler_dynamic.a 00:01:35.718 LIB libspdk_scheduler_gscheduler.a 00:01:35.718 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:35.718 SO libspdk_keyring_linux.so.1.0 00:01:35.718 LIB libspdk_accel_error.a 00:01:35.718 LIB libspdk_accel_iaa.a 00:01:35.718 SO libspdk_scheduler_gscheduler.so.4.0 00:01:35.718 SO libspdk_scheduler_dynamic.so.4.0 00:01:35.718 LIB libspdk_accel_ioat.a 00:01:35.718 SO libspdk_keyring_file.so.1.0 
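From this point the CC/LIB/SO/SYMLINK prefixes are SPDK's quiet-make progress labels: CC compiles one object, LIB archives a static libspdk_*.a, SO links the versioned shared object, and SYMLINK adds the unversioned .so alias next to it. A hedged sketch of inspecting one of the versioned objects named in this log, using plain binutils rather than any SPDK tool (build/lib as the output directory is an assumption based on SPDK's default layout):

    # Illustrative: examine a shared object this phase produced.
    # libspdk_log.so.7.0 is named in the log above; build/lib is assumed.
    so=build/lib/libspdk_log.so.7.0
    readelf -d "$so" | grep SONAME        # the embedded versioned SONAME
    nm -D --defined-only "$so" | head     # a few of its exported symbols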
00:01:35.978 SO libspdk_accel_error.so.2.0 00:01:35.978 SO libspdk_accel_iaa.so.3.0 00:01:35.978 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:35.978 LIB libspdk_accel_dsa.a 00:01:35.978 LIB libspdk_blob_bdev.a 00:01:35.978 SO libspdk_accel_ioat.so.6.0 00:01:35.978 SYMLINK libspdk_keyring_linux.so 00:01:35.978 SYMLINK libspdk_scheduler_gscheduler.so 00:01:35.978 SYMLINK libspdk_scheduler_dynamic.so 00:01:35.978 SO libspdk_blob_bdev.so.11.0 00:01:35.978 SYMLINK libspdk_keyring_file.so 00:01:35.978 SO libspdk_accel_dsa.so.5.0 00:01:35.978 SYMLINK libspdk_accel_error.so 00:01:35.978 SYMLINK libspdk_accel_iaa.so 00:01:35.978 SYMLINK libspdk_accel_ioat.so 00:01:35.978 SYMLINK libspdk_blob_bdev.so 00:01:35.978 SYMLINK libspdk_accel_dsa.so 00:01:35.978 LIB libspdk_vfu_device.a 00:01:35.978 SO libspdk_vfu_device.so.3.0 00:01:36.236 SYMLINK libspdk_vfu_device.so 00:01:36.236 LIB libspdk_sock_posix.a 00:01:36.236 SO libspdk_sock_posix.so.6.0 00:01:36.236 SYMLINK libspdk_sock_posix.so 00:01:36.494 CC module/bdev/gpt/gpt.o 00:01:36.494 CC module/bdev/gpt/vbdev_gpt.o 00:01:36.494 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:36.494 CC module/bdev/error/vbdev_error_rpc.o 00:01:36.494 CC module/bdev/lvol/vbdev_lvol.o 00:01:36.494 CC module/bdev/error/vbdev_error.o 00:01:36.494 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:36.494 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:36.494 CC module/bdev/nvme/bdev_nvme.o 00:01:36.494 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:36.494 CC module/bdev/delay/vbdev_delay.o 00:01:36.494 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:36.494 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:36.494 CC module/bdev/nvme/bdev_mdns_client.o 00:01:36.494 CC module/blobfs/bdev/blobfs_bdev.o 00:01:36.494 CC module/bdev/nvme/nvme_rpc.o 00:01:36.494 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:36.494 CC module/bdev/split/vbdev_split.o 00:01:36.494 CC module/bdev/nvme/vbdev_opal.o 00:01:36.494 CC module/bdev/ftl/bdev_ftl.o 00:01:36.494 CC module/bdev/iscsi/bdev_iscsi.o 00:01:36.494 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:36.494 CC module/bdev/split/vbdev_split_rpc.o 00:01:36.494 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:36.494 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:36.494 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:36.494 CC module/bdev/raid/bdev_raid.o 00:01:36.494 CC module/bdev/raid/bdev_raid_sb.o 00:01:36.494 CC module/bdev/passthru/vbdev_passthru.o 00:01:36.494 CC module/bdev/raid/bdev_raid_rpc.o 00:01:36.494 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:36.494 CC module/bdev/raid/concat.o 00:01:36.494 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:36.494 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:36.494 CC module/bdev/raid/raid0.o 00:01:36.494 CC module/bdev/raid/raid1.o 00:01:36.494 CC module/bdev/malloc/bdev_malloc.o 00:01:36.494 CC module/bdev/null/bdev_null.o 00:01:36.494 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:36.494 CC module/bdev/null/bdev_null_rpc.o 00:01:36.494 CC module/bdev/aio/bdev_aio_rpc.o 00:01:36.494 CC module/bdev/aio/bdev_aio.o 00:01:36.753 LIB libspdk_blobfs_bdev.a 00:01:36.753 SO libspdk_blobfs_bdev.so.6.0 00:01:36.753 LIB libspdk_bdev_split.a 00:01:36.753 LIB libspdk_bdev_error.a 00:01:36.753 LIB libspdk_bdev_gpt.a 00:01:36.753 LIB libspdk_bdev_null.a 00:01:36.753 SO libspdk_bdev_split.so.6.0 00:01:36.753 SO libspdk_bdev_error.so.6.0 00:01:36.753 SYMLINK libspdk_blobfs_bdev.so 00:01:36.753 LIB libspdk_bdev_ftl.a 00:01:36.753 SO libspdk_bdev_gpt.so.6.0 00:01:36.753 SO libspdk_bdev_null.so.6.0 00:01:36.753 LIB 
libspdk_bdev_passthru.a 00:01:36.753 LIB libspdk_bdev_zone_block.a 00:01:36.753 SO libspdk_bdev_passthru.so.6.0 00:01:36.753 SO libspdk_bdev_ftl.so.6.0 00:01:36.753 SYMLINK libspdk_bdev_error.so 00:01:36.753 SYMLINK libspdk_bdev_split.so 00:01:36.753 LIB libspdk_bdev_aio.a 00:01:36.753 LIB libspdk_bdev_malloc.a 00:01:36.753 LIB libspdk_bdev_delay.a 00:01:36.753 LIB libspdk_bdev_iscsi.a 00:01:36.753 SYMLINK libspdk_bdev_gpt.so 00:01:37.012 SYMLINK libspdk_bdev_null.so 00:01:37.012 SO libspdk_bdev_zone_block.so.6.0 00:01:37.012 SO libspdk_bdev_aio.so.6.0 00:01:37.012 SO libspdk_bdev_delay.so.6.0 00:01:37.012 SYMLINK libspdk_bdev_passthru.so 00:01:37.012 SO libspdk_bdev_iscsi.so.6.0 00:01:37.012 SO libspdk_bdev_malloc.so.6.0 00:01:37.012 SYMLINK libspdk_bdev_ftl.so 00:01:37.012 LIB libspdk_bdev_lvol.a 00:01:37.012 LIB libspdk_bdev_virtio.a 00:01:37.012 SYMLINK libspdk_bdev_zone_block.so 00:01:37.012 SYMLINK libspdk_bdev_aio.so 00:01:37.012 SYMLINK libspdk_bdev_delay.so 00:01:37.012 SO libspdk_bdev_lvol.so.6.0 00:01:37.012 SYMLINK libspdk_bdev_iscsi.so 00:01:37.012 SYMLINK libspdk_bdev_malloc.so 00:01:37.012 SO libspdk_bdev_virtio.so.6.0 00:01:37.012 SYMLINK libspdk_bdev_lvol.so 00:01:37.012 SYMLINK libspdk_bdev_virtio.so 00:01:37.280 LIB libspdk_bdev_raid.a 00:01:37.280 SO libspdk_bdev_raid.so.6.0 00:01:37.542 SYMLINK libspdk_bdev_raid.so 00:01:38.109 LIB libspdk_bdev_nvme.a 00:01:38.109 SO libspdk_bdev_nvme.so.7.0 00:01:38.109 SYMLINK libspdk_bdev_nvme.so 00:01:39.047 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:39.047 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:39.047 CC module/event/subsystems/vmd/vmd.o 00:01:39.047 CC module/event/subsystems/iobuf/iobuf.o 00:01:39.047 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:39.047 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:39.047 CC module/event/subsystems/sock/sock.o 00:01:39.047 CC module/event/subsystems/keyring/keyring.o 00:01:39.047 CC module/event/subsystems/scheduler/scheduler.o 00:01:39.047 LIB libspdk_event_vhost_blk.a 00:01:39.047 LIB libspdk_event_vfu_tgt.a 00:01:39.047 LIB libspdk_event_iobuf.a 00:01:39.047 SO libspdk_event_vhost_blk.so.3.0 00:01:39.047 LIB libspdk_event_vmd.a 00:01:39.047 LIB libspdk_event_sock.a 00:01:39.047 LIB libspdk_event_keyring.a 00:01:39.047 SO libspdk_event_vfu_tgt.so.3.0 00:01:39.047 LIB libspdk_event_scheduler.a 00:01:39.047 SO libspdk_event_iobuf.so.3.0 00:01:39.047 SO libspdk_event_vmd.so.6.0 00:01:39.047 SO libspdk_event_keyring.so.1.0 00:01:39.047 SO libspdk_event_sock.so.5.0 00:01:39.047 SYMLINK libspdk_event_vhost_blk.so 00:01:39.047 SO libspdk_event_scheduler.so.4.0 00:01:39.047 SYMLINK libspdk_event_vfu_tgt.so 00:01:39.047 SYMLINK libspdk_event_iobuf.so 00:01:39.047 SYMLINK libspdk_event_keyring.so 00:01:39.047 SYMLINK libspdk_event_sock.so 00:01:39.047 SYMLINK libspdk_event_vmd.so 00:01:39.306 SYMLINK libspdk_event_scheduler.so 00:01:39.565 CC module/event/subsystems/accel/accel.o 00:01:39.565 LIB libspdk_event_accel.a 00:01:39.565 SO libspdk_event_accel.so.6.0 00:01:39.565 SYMLINK libspdk_event_accel.so 00:01:40.132 CC module/event/subsystems/bdev/bdev.o 00:01:40.132 LIB libspdk_event_bdev.a 00:01:40.132 SO libspdk_event_bdev.so.6.0 00:01:40.391 SYMLINK libspdk_event_bdev.so 00:01:40.648 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:40.648 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:40.648 CC module/event/subsystems/nbd/nbd.o 00:01:40.648 CC module/event/subsystems/ublk/ublk.o 00:01:40.648 CC module/event/subsystems/scsi/scsi.o 00:01:40.907 LIB libspdk_event_nbd.a 
00:01:40.907 LIB libspdk_event_ublk.a 00:01:40.907 LIB libspdk_event_scsi.a 00:01:40.907 SO libspdk_event_nbd.so.6.0 00:01:40.907 LIB libspdk_event_nvmf.a 00:01:40.907 SO libspdk_event_ublk.so.3.0 00:01:40.907 SO libspdk_event_scsi.so.6.0 00:01:40.907 SYMLINK libspdk_event_nbd.so 00:01:40.907 SO libspdk_event_nvmf.so.6.0 00:01:40.907 SYMLINK libspdk_event_ublk.so 00:01:40.907 SYMLINK libspdk_event_scsi.so 00:01:40.907 SYMLINK libspdk_event_nvmf.so 00:01:41.206 CC module/event/subsystems/iscsi/iscsi.o 00:01:41.206 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:41.469 LIB libspdk_event_iscsi.a 00:01:41.469 SO libspdk_event_iscsi.so.6.0 00:01:41.469 LIB libspdk_event_vhost_scsi.a 00:01:41.469 SO libspdk_event_vhost_scsi.so.3.0 00:01:41.469 SYMLINK libspdk_event_iscsi.so 00:01:41.469 SYMLINK libspdk_event_vhost_scsi.so 00:01:41.728 SO libspdk.so.6.0 00:01:41.728 SYMLINK libspdk.so 00:01:41.986 CC test/rpc_client/rpc_client_test.o 00:01:42.257 TEST_HEADER include/spdk/accel.h 00:01:42.257 CC app/trace_record/trace_record.o 00:01:42.257 TEST_HEADER include/spdk/accel_module.h 00:01:42.257 TEST_HEADER include/spdk/barrier.h 00:01:42.257 TEST_HEADER include/spdk/assert.h 00:01:42.257 TEST_HEADER include/spdk/bdev_module.h 00:01:42.258 TEST_HEADER include/spdk/bdev_zone.h 00:01:42.258 TEST_HEADER include/spdk/base64.h 00:01:42.258 TEST_HEADER include/spdk/bdev.h 00:01:42.258 TEST_HEADER include/spdk/bit_pool.h 00:01:42.258 TEST_HEADER include/spdk/bit_array.h 00:01:42.258 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:42.258 CC app/spdk_top/spdk_top.o 00:01:42.258 TEST_HEADER include/spdk/blob_bdev.h 00:01:42.258 CC app/spdk_lspci/spdk_lspci.o 00:01:42.258 TEST_HEADER include/spdk/blobfs.h 00:01:42.258 TEST_HEADER include/spdk/conf.h 00:01:42.258 TEST_HEADER include/spdk/blob.h 00:01:42.258 CXX app/trace/trace.o 00:01:42.258 TEST_HEADER include/spdk/config.h 00:01:42.258 TEST_HEADER include/spdk/cpuset.h 00:01:42.258 TEST_HEADER include/spdk/crc16.h 00:01:42.258 CC app/spdk_nvme_discover/discovery_aer.o 00:01:42.258 TEST_HEADER include/spdk/crc64.h 00:01:42.258 TEST_HEADER include/spdk/dif.h 00:01:42.258 CC app/spdk_nvme_identify/identify.o 00:01:42.258 TEST_HEADER include/spdk/crc32.h 00:01:42.258 CC app/spdk_nvme_perf/perf.o 00:01:42.258 TEST_HEADER include/spdk/dma.h 00:01:42.258 TEST_HEADER include/spdk/env.h 00:01:42.258 TEST_HEADER include/spdk/endian.h 00:01:42.258 TEST_HEADER include/spdk/env_dpdk.h 00:01:42.258 TEST_HEADER include/spdk/event.h 00:01:42.258 TEST_HEADER include/spdk/fd_group.h 00:01:42.258 TEST_HEADER include/spdk/file.h 00:01:42.258 TEST_HEADER include/spdk/fd.h 00:01:42.258 TEST_HEADER include/spdk/ftl.h 00:01:42.258 TEST_HEADER include/spdk/hexlify.h 00:01:42.258 TEST_HEADER include/spdk/gpt_spec.h 00:01:42.258 TEST_HEADER include/spdk/histogram_data.h 00:01:42.258 TEST_HEADER include/spdk/idxd.h 00:01:42.258 TEST_HEADER include/spdk/idxd_spec.h 00:01:42.258 TEST_HEADER include/spdk/ioat.h 00:01:42.258 TEST_HEADER include/spdk/ioat_spec.h 00:01:42.258 TEST_HEADER include/spdk/init.h 00:01:42.258 TEST_HEADER include/spdk/iscsi_spec.h 00:01:42.258 TEST_HEADER include/spdk/jsonrpc.h 00:01:42.258 TEST_HEADER include/spdk/json.h 00:01:42.258 TEST_HEADER include/spdk/keyring.h 00:01:42.258 TEST_HEADER include/spdk/keyring_module.h 00:01:42.258 TEST_HEADER include/spdk/likely.h 00:01:42.258 CC app/spdk_dd/spdk_dd.o 00:01:42.258 TEST_HEADER include/spdk/log.h 00:01:42.258 TEST_HEADER include/spdk/lvol.h 00:01:42.258 TEST_HEADER include/spdk/memory.h 00:01:42.258 
TEST_HEADER include/spdk/nbd.h 00:01:42.258 TEST_HEADER include/spdk/mmio.h 00:01:42.258 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:42.258 TEST_HEADER include/spdk/net.h 00:01:42.258 TEST_HEADER include/spdk/notify.h 00:01:42.258 TEST_HEADER include/spdk/nvme_intel.h 00:01:42.258 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:42.258 TEST_HEADER include/spdk/nvme.h 00:01:42.258 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:42.258 TEST_HEADER include/spdk/nvme_zns.h 00:01:42.258 TEST_HEADER include/spdk/nvme_spec.h 00:01:42.258 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:42.258 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:42.258 TEST_HEADER include/spdk/nvmf_spec.h 00:01:42.258 TEST_HEADER include/spdk/nvmf.h 00:01:42.258 CC app/nvmf_tgt/nvmf_main.o 00:01:42.258 TEST_HEADER include/spdk/nvmf_transport.h 00:01:42.258 TEST_HEADER include/spdk/pci_ids.h 00:01:42.258 TEST_HEADER include/spdk/opal.h 00:01:42.258 TEST_HEADER include/spdk/opal_spec.h 00:01:42.258 TEST_HEADER include/spdk/pipe.h 00:01:42.258 TEST_HEADER include/spdk/queue.h 00:01:42.258 TEST_HEADER include/spdk/reduce.h 00:01:42.258 TEST_HEADER include/spdk/rpc.h 00:01:42.258 CC app/iscsi_tgt/iscsi_tgt.o 00:01:42.258 TEST_HEADER include/spdk/scheduler.h 00:01:42.258 TEST_HEADER include/spdk/scsi_spec.h 00:01:42.258 TEST_HEADER include/spdk/sock.h 00:01:42.258 TEST_HEADER include/spdk/scsi.h 00:01:42.258 TEST_HEADER include/spdk/stdinc.h 00:01:42.258 TEST_HEADER include/spdk/string.h 00:01:42.258 TEST_HEADER include/spdk/thread.h 00:01:42.258 TEST_HEADER include/spdk/trace_parser.h 00:01:42.258 TEST_HEADER include/spdk/tree.h 00:01:42.258 TEST_HEADER include/spdk/trace.h 00:01:42.258 TEST_HEADER include/spdk/ublk.h 00:01:42.258 TEST_HEADER include/spdk/util.h 00:01:42.258 TEST_HEADER include/spdk/version.h 00:01:42.258 TEST_HEADER include/spdk/uuid.h 00:01:42.258 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:42.258 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:42.258 TEST_HEADER include/spdk/vhost.h 00:01:42.258 TEST_HEADER include/spdk/xor.h 00:01:42.258 TEST_HEADER include/spdk/vmd.h 00:01:42.258 TEST_HEADER include/spdk/zipf.h 00:01:42.258 CXX test/cpp_headers/accel.o 00:01:42.258 CXX test/cpp_headers/accel_module.o 00:01:42.258 CXX test/cpp_headers/assert.o 00:01:42.258 CXX test/cpp_headers/barrier.o 00:01:42.258 CXX test/cpp_headers/bdev_module.o 00:01:42.258 CXX test/cpp_headers/base64.o 00:01:42.258 CXX test/cpp_headers/bdev.o 00:01:42.258 CXX test/cpp_headers/bit_array.o 00:01:42.258 CXX test/cpp_headers/bdev_zone.o 00:01:42.258 CXX test/cpp_headers/bit_pool.o 00:01:42.258 CXX test/cpp_headers/blobfs_bdev.o 00:01:42.258 CXX test/cpp_headers/blob_bdev.o 00:01:42.258 CXX test/cpp_headers/blobfs.o 00:01:42.258 CXX test/cpp_headers/blob.o 00:01:42.258 CXX test/cpp_headers/cpuset.o 00:01:42.258 CXX test/cpp_headers/config.o 00:01:42.258 CXX test/cpp_headers/crc16.o 00:01:42.258 CXX test/cpp_headers/conf.o 00:01:42.258 CC app/spdk_tgt/spdk_tgt.o 00:01:42.258 CXX test/cpp_headers/crc32.o 00:01:42.258 CXX test/cpp_headers/crc64.o 00:01:42.258 CXX test/cpp_headers/dif.o 00:01:42.258 CXX test/cpp_headers/dma.o 00:01:42.258 CXX test/cpp_headers/endian.o 00:01:42.258 CXX test/cpp_headers/env_dpdk.o 00:01:42.258 CXX test/cpp_headers/env.o 00:01:42.258 CXX test/cpp_headers/event.o 00:01:42.258 CXX test/cpp_headers/fd.o 00:01:42.258 CXX test/cpp_headers/fd_group.o 00:01:42.258 CXX test/cpp_headers/file.o 00:01:42.258 CXX test/cpp_headers/ftl.o 00:01:42.258 CXX test/cpp_headers/gpt_spec.o 00:01:42.258 CXX test/cpp_headers/hexlify.o 
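The TEST_HEADER and CXX test/cpp_headers entries running through this stretch of the log (they continue below) are SPDK's header self-containment check: each public include/spdk/*.h is compiled in a translation unit of its own, as C++, so a header missing one of its own includes fails here instead of in user code. A rough, illustrative equivalent under that assumption; the real test generates per-header source files rather than piping from stdin:

    # Rough sketch of what the CXX test/cpp_headers/*.o entries verify:
    # every public header must compile standalone.
    for h in include/spdk/*.h; do
        echo "#include <spdk/$(basename "$h")>" \
            | g++ -x c++ -std=c++11 -c -Iinclude -o /dev/null - \
            || echo "not standalone: $h"
    done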
00:01:42.258 CXX test/cpp_headers/histogram_data.o 00:01:42.258 CXX test/cpp_headers/idxd.o 00:01:42.258 CXX test/cpp_headers/idxd_spec.o 00:01:42.258 CXX test/cpp_headers/init.o 00:01:42.258 CXX test/cpp_headers/ioat_spec.o 00:01:42.258 CXX test/cpp_headers/ioat.o 00:01:42.258 CXX test/cpp_headers/iscsi_spec.o 00:01:42.258 CXX test/cpp_headers/keyring.o 00:01:42.258 CXX test/cpp_headers/jsonrpc.o 00:01:42.258 CXX test/cpp_headers/json.o 00:01:42.258 CXX test/cpp_headers/keyring_module.o 00:01:42.258 CXX test/cpp_headers/likely.o 00:01:42.258 CXX test/cpp_headers/log.o 00:01:42.258 CXX test/cpp_headers/memory.o 00:01:42.258 CXX test/cpp_headers/nbd.o 00:01:42.258 CXX test/cpp_headers/lvol.o 00:01:42.258 CXX test/cpp_headers/mmio.o 00:01:42.258 CXX test/cpp_headers/net.o 00:01:42.258 CXX test/cpp_headers/notify.o 00:01:42.258 CXX test/cpp_headers/nvme.o 00:01:42.258 CXX test/cpp_headers/nvme_intel.o 00:01:42.258 CXX test/cpp_headers/nvme_ocssd.o 00:01:42.258 CXX test/cpp_headers/nvme_spec.o 00:01:42.258 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:42.258 CXX test/cpp_headers/nvme_zns.o 00:01:42.258 CXX test/cpp_headers/nvmf_cmd.o 00:01:42.258 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:42.258 CXX test/cpp_headers/nvmf.o 00:01:42.258 CXX test/cpp_headers/nvmf_spec.o 00:01:42.258 CXX test/cpp_headers/nvmf_transport.o 00:01:42.258 CXX test/cpp_headers/opal.o 00:01:42.258 CXX test/cpp_headers/opal_spec.o 00:01:42.258 CXX test/cpp_headers/pipe.o 00:01:42.258 CXX test/cpp_headers/pci_ids.o 00:01:42.258 CXX test/cpp_headers/queue.o 00:01:42.258 CXX test/cpp_headers/reduce.o 00:01:42.258 CXX test/cpp_headers/rpc.o 00:01:42.258 CXX test/cpp_headers/scheduler.o 00:01:42.258 CXX test/cpp_headers/scsi.o 00:01:42.258 CXX test/cpp_headers/scsi_spec.o 00:01:42.258 CXX test/cpp_headers/sock.o 00:01:42.258 CXX test/cpp_headers/stdinc.o 00:01:42.258 CC test/thread/poller_perf/poller_perf.o 00:01:42.258 CXX test/cpp_headers/string.o 00:01:42.258 CC test/app/histogram_perf/histogram_perf.o 00:01:42.258 CXX test/cpp_headers/thread.o 00:01:42.258 CC examples/ioat/verify/verify.o 00:01:42.258 CXX test/cpp_headers/trace_parser.o 00:01:42.258 CXX test/cpp_headers/trace.o 00:01:42.258 CXX test/cpp_headers/tree.o 00:01:42.258 CXX test/cpp_headers/ublk.o 00:01:42.258 CXX test/cpp_headers/util.o 00:01:42.258 CC test/env/pci/pci_ut.o 00:01:42.258 CXX test/cpp_headers/uuid.o 00:01:42.258 CC examples/ioat/perf/perf.o 00:01:42.258 CC test/env/memory/memory_ut.o 00:01:42.258 CC examples/util/zipf/zipf.o 00:01:42.540 CC test/env/vtophys/vtophys.o 00:01:42.540 CC test/app/jsoncat/jsoncat.o 00:01:42.540 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:42.540 CC test/app/stub/stub.o 00:01:42.540 CC test/app/bdev_svc/bdev_svc.o 00:01:42.540 CC test/dma/test_dma/test_dma.o 00:01:42.540 CC app/fio/nvme/fio_plugin.o 00:01:42.540 LINK rpc_client_test 00:01:42.540 CXX test/cpp_headers/version.o 00:01:42.540 CXX test/cpp_headers/vfio_user_pci.o 00:01:42.540 CC app/fio/bdev/fio_plugin.o 00:01:42.806 CXX test/cpp_headers/vfio_user_spec.o 00:01:42.806 LINK spdk_lspci 00:01:42.806 LINK spdk_nvme_discover 00:01:42.806 LINK interrupt_tgt 00:01:42.806 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:42.806 CC test/env/mem_callbacks/mem_callbacks.o 00:01:42.806 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:42.806 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:43.065 LINK nvmf_tgt 00:01:43.065 LINK poller_perf 00:01:43.065 LINK histogram_perf 00:01:43.065 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:43.065 LINK jsoncat 00:01:43.065 
CXX test/cpp_headers/vhost.o 00:01:43.065 CXX test/cpp_headers/vmd.o 00:01:43.065 LINK spdk_trace_record 00:01:43.065 CXX test/cpp_headers/xor.o 00:01:43.065 CXX test/cpp_headers/zipf.o 00:01:43.065 LINK iscsi_tgt 00:01:43.065 LINK vtophys 00:01:43.065 LINK zipf 00:01:43.065 LINK stub 00:01:43.065 LINK verify 00:01:43.065 LINK bdev_svc 00:01:43.065 LINK env_dpdk_post_init 00:01:43.065 LINK spdk_dd 00:01:43.065 LINK spdk_tgt 00:01:43.065 LINK ioat_perf 00:01:43.324 LINK spdk_trace 00:01:43.324 LINK test_dma 00:01:43.324 LINK pci_ut 00:01:43.324 LINK vhost_fuzz 00:01:43.324 LINK spdk_nvme 00:01:43.583 LINK nvme_fuzz 00:01:43.583 LINK spdk_bdev 00:01:43.583 LINK spdk_nvme_perf 00:01:43.583 LINK spdk_nvme_identify 00:01:43.583 LINK spdk_top 00:01:43.583 CC examples/idxd/perf/perf.o 00:01:43.583 CC examples/vmd/lsvmd/lsvmd.o 00:01:43.583 CC examples/vmd/led/led.o 00:01:43.583 LINK mem_callbacks 00:01:43.583 CC examples/sock/hello_world/hello_sock.o 00:01:43.583 CC test/event/reactor/reactor.o 00:01:43.583 CC app/vhost/vhost.o 00:01:43.583 CC test/event/reactor_perf/reactor_perf.o 00:01:43.583 CC test/event/event_perf/event_perf.o 00:01:43.583 CC test/event/app_repeat/app_repeat.o 00:01:43.583 CC examples/thread/thread/thread_ex.o 00:01:43.583 CC test/event/scheduler/scheduler.o 00:01:43.842 LINK lsvmd 00:01:43.842 LINK led 00:01:43.842 CC test/nvme/err_injection/err_injection.o 00:01:43.842 CC test/nvme/connect_stress/connect_stress.o 00:01:43.842 LINK reactor 00:01:43.842 LINK reactor_perf 00:01:43.842 CC test/nvme/e2edp/nvme_dp.o 00:01:43.842 CC test/nvme/sgl/sgl.o 00:01:43.842 CC test/nvme/fdp/fdp.o 00:01:43.842 CC test/nvme/reserve/reserve.o 00:01:43.842 CC test/nvme/aer/aer.o 00:01:43.842 CC test/nvme/fused_ordering/fused_ordering.o 00:01:43.842 LINK event_perf 00:01:43.842 CC test/nvme/cuse/cuse.o 00:01:43.842 CC test/nvme/compliance/nvme_compliance.o 00:01:43.842 CC test/nvme/simple_copy/simple_copy.o 00:01:43.842 CC test/nvme/boot_partition/boot_partition.o 00:01:43.842 LINK memory_ut 00:01:43.842 CC test/nvme/overhead/overhead.o 00:01:43.842 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:43.842 CC test/nvme/startup/startup.o 00:01:43.842 CC test/nvme/reset/reset.o 00:01:43.842 LINK vhost 00:01:43.842 CC test/blobfs/mkfs/mkfs.o 00:01:43.842 LINK app_repeat 00:01:43.842 LINK hello_sock 00:01:43.842 CC test/accel/dif/dif.o 00:01:43.842 LINK idxd_perf 00:01:43.842 LINK scheduler 00:01:43.842 LINK thread 00:01:43.842 CC test/lvol/esnap/esnap.o 00:01:44.100 LINK err_injection 00:01:44.100 LINK boot_partition 00:01:44.100 LINK connect_stress 00:01:44.100 LINK reserve 00:01:44.100 LINK startup 00:01:44.100 LINK doorbell_aers 00:01:44.100 LINK fused_ordering 00:01:44.100 LINK nvme_dp 00:01:44.100 LINK mkfs 00:01:44.100 LINK simple_copy 00:01:44.100 LINK aer 00:01:44.100 LINK reset 00:01:44.100 LINK sgl 00:01:44.100 LINK nvme_compliance 00:01:44.100 LINK overhead 00:01:44.100 LINK fdp 00:01:44.358 LINK dif 00:01:44.358 LINK iscsi_fuzz 00:01:44.358 CC examples/nvme/hello_world/hello_world.o 00:01:44.358 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:44.358 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:44.358 CC examples/nvme/arbitration/arbitration.o 00:01:44.358 CC examples/nvme/reconnect/reconnect.o 00:01:44.358 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:44.358 CC examples/nvme/abort/abort.o 00:01:44.358 CC examples/nvme/hotplug/hotplug.o 00:01:44.358 CC examples/accel/perf/accel_perf.o 00:01:44.616 CC examples/blob/hello_world/hello_blob.o 00:01:44.616 CC 
examples/blob/cli/blobcli.o 00:01:44.616 LINK pmr_persistence 00:01:44.616 LINK cmb_copy 00:01:44.616 LINK hello_world 00:01:44.616 LINK hotplug 00:01:44.616 LINK reconnect 00:01:44.616 LINK arbitration 00:01:44.616 LINK abort 00:01:44.616 LINK nvme_manage 00:01:44.616 LINK hello_blob 00:01:44.874 CC test/bdev/bdevio/bdevio.o 00:01:44.874 LINK cuse 00:01:44.874 LINK accel_perf 00:01:44.874 LINK blobcli 00:01:45.132 LINK bdevio 00:01:45.391 CC examples/bdev/bdevperf/bdevperf.o 00:01:45.391 CC examples/bdev/hello_world/hello_bdev.o 00:01:45.649 LINK hello_bdev 00:01:45.907 LINK bdevperf 00:01:46.474 CC examples/nvmf/nvmf/nvmf.o 00:01:46.732 LINK nvmf 00:01:47.298 LINK esnap 00:01:47.556 00:01:47.556 real 0m49.073s 00:01:47.556 user 6m27.111s 00:01:47.556 sys 4m13.139s 00:01:47.556 19:02:33 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:47.556 19:02:33 make -- common/autotest_common.sh@10 -- $ set +x 00:01:47.556 ************************************ 00:01:47.556 END TEST make 00:01:47.556 ************************************ 00:01:47.815 19:02:33 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:47.815 19:02:33 -- pm/common@29 -- $ signal_monitor_resources TERM 00:01:47.815 19:02:33 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:01:47.815 19:02:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:47.815 19:02:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:47.815 19:02:33 -- pm/common@44 -- $ pid=1222889 00:01:47.815 19:02:33 -- pm/common@50 -- $ kill -TERM 1222889 00:01:47.815 19:02:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:47.815 19:02:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:47.815 19:02:33 -- pm/common@44 -- $ pid=1222891 00:01:47.815 19:02:33 -- pm/common@50 -- $ kill -TERM 1222891 00:01:47.815 19:02:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:47.815 19:02:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:47.815 19:02:33 -- pm/common@44 -- $ pid=1222893 00:01:47.815 19:02:33 -- pm/common@50 -- $ kill -TERM 1222893 00:01:47.815 19:02:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:47.815 19:02:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:47.815 19:02:33 -- pm/common@44 -- $ pid=1222912 00:01:47.815 19:02:33 -- pm/common@50 -- $ sudo -E kill -TERM 1222912 00:01:47.815 19:02:33 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:47.815 19:02:33 -- nvmf/common.sh@7 -- # uname -s 00:01:47.815 19:02:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:47.815 19:02:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:47.815 19:02:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:47.815 19:02:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:47.815 19:02:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:47.815 19:02:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:47.815 19:02:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:47.815 19:02:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:47.815 19:02:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:47.815 19:02:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:47.815 
19:02:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:01:47.815 19:02:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:01:47.815 19:02:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:47.815 19:02:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:47.815 19:02:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:47.815 19:02:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:47.815 19:02:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:47.815 19:02:33 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:47.815 19:02:33 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:47.815 19:02:33 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:47.815 19:02:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.815 19:02:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.815 19:02:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.815 19:02:33 -- paths/export.sh@5 -- # export PATH 00:01:47.815 19:02:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.815 19:02:33 -- nvmf/common.sh@47 -- # : 0 00:01:47.815 19:02:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:47.815 19:02:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:47.815 19:02:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:47.815 19:02:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:47.815 19:02:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:47.815 19:02:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:47.815 19:02:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:47.815 19:02:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:47.815 19:02:33 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:47.815 19:02:33 -- spdk/autotest.sh@32 -- # uname -s 00:01:47.815 19:02:33 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:47.815 19:02:33 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:47.815 19:02:33 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:47.815 19:02:33 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:47.815 19:02:33 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:47.815 19:02:33 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:47.815 19:02:33 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:47.815 19:02:33 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:47.815 19:02:33 -- spdk/autotest.sh@48 -- # udevadm_pid=1283693 00:01:47.815 19:02:33 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:47.815 19:02:33 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:47.815 19:02:33 -- pm/common@17 -- # local monitor 00:01:47.815 19:02:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:47.815 19:02:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:47.815 19:02:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:47.815 19:02:33 -- pm/common@21 -- # date +%s 00:01:47.815 19:02:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:47.815 19:02:33 -- pm/common@21 -- # date +%s 00:01:47.815 19:02:33 -- pm/common@25 -- # sleep 1 00:01:47.815 19:02:33 -- pm/common@21 -- # date +%s 00:01:47.815 19:02:33 -- pm/common@21 -- # date +%s 00:01:47.815 19:02:33 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721840553 00:01:47.815 19:02:33 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721840553 00:01:47.815 19:02:33 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721840553 00:01:47.815 19:02:33 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721840553 00:01:47.815 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721840553_collect-cpu-load.pm.log 00:01:47.815 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721840553_collect-vmstat.pm.log 00:01:47.815 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721840553_collect-cpu-temp.pm.log 00:01:48.073 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721840553_collect-bmc-pm.bmc.pm.log 00:01:49.008 19:02:34 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:49.008 19:02:34 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:49.008 19:02:34 -- common/autotest_common.sh@724 -- # xtrace_disable 00:01:49.008 19:02:34 -- common/autotest_common.sh@10 -- # set +x 00:01:49.008 19:02:35 -- spdk/autotest.sh@59 -- # create_test_list 00:01:49.008 19:02:35 -- common/autotest_common.sh@748 -- # xtrace_disable 00:01:49.008 19:02:35 -- common/autotest_common.sh@10 -- # set +x 00:01:49.008 19:02:35 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:49.008 19:02:35 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:49.008 19:02:35 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
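[editor's note] The spdk/autotest.sh@33-40 trace above saves the kernel's core_pattern and replaces it with a pipe-style collector before the resource monitors start. A minimal sketch of that mechanism, assuming a hypothetical collector path and output directory; writing core_pattern requires root:

#!/usr/bin/env bash
# Sketch of the pipe-style core_pattern swap traced above; run as root.
# COLLECTOR and COREDUMP_DIR are illustrative assumptions.
COLLECTOR=/usr/local/bin/core-collector.sh
COREDUMP_DIR=/tmp/coredumps

# Save the current pattern so it can be restored after the test run.
old_core_pattern=$(</proc/sys/kernel/core_pattern)
mkdir -p "$COREDUMP_DIR"

# A leading '|' tells the kernel to pipe each core dump into the helper;
# %P is the dumping PID, %s the signal, %t the dump time (see core(5)).
echo "|$COLLECTOR %P %s %t" > /proc/sys/kernel/core_pattern

# ... run tests, collect dumps from $COREDUMP_DIR ...

echo "$old_core_pattern" > /proc/sys/kernel/core_pattern   # restore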
00:01:49.008 19:02:35 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:49.008 19:02:35 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:49.008 19:02:35 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:49.008 19:02:35 -- common/autotest_common.sh@1455 -- # uname 00:01:49.008 19:02:35 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:01:49.008 19:02:35 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:49.008 19:02:35 -- common/autotest_common.sh@1475 -- # uname 00:01:49.008 19:02:35 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:01:49.008 19:02:35 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:49.008 19:02:35 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:49.008 19:02:35 -- spdk/autotest.sh@72 -- # hash lcov 00:01:49.008 19:02:35 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:49.008 19:02:35 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:49.008 --rc lcov_branch_coverage=1 00:01:49.008 --rc lcov_function_coverage=1 00:01:49.008 --rc genhtml_branch_coverage=1 00:01:49.008 --rc genhtml_function_coverage=1 00:01:49.008 --rc genhtml_legend=1 00:01:49.008 --rc geninfo_all_blocks=1 00:01:49.008 ' 00:01:49.008 19:02:35 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:49.008 --rc lcov_branch_coverage=1 00:01:49.008 --rc lcov_function_coverage=1 00:01:49.008 --rc genhtml_branch_coverage=1 00:01:49.008 --rc genhtml_function_coverage=1 00:01:49.008 --rc genhtml_legend=1 00:01:49.008 --rc geninfo_all_blocks=1 00:01:49.008 ' 00:01:49.008 19:02:35 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:49.008 --rc lcov_branch_coverage=1 00:01:49.008 --rc lcov_function_coverage=1 00:01:49.008 --rc genhtml_branch_coverage=1 00:01:49.008 --rc genhtml_function_coverage=1 00:01:49.008 --rc genhtml_legend=1 00:01:49.008 --rc geninfo_all_blocks=1 00:01:49.008 --no-external' 00:01:49.008 19:02:35 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:49.008 --rc lcov_branch_coverage=1 00:01:49.008 --rc lcov_function_coverage=1 00:01:49.008 --rc genhtml_branch_coverage=1 00:01:49.008 --rc genhtml_function_coverage=1 00:01:49.008 --rc genhtml_legend=1 00:01:49.008 --rc geninfo_all_blocks=1 00:01:49.008 --no-external' 00:01:49.008 19:02:35 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:49.008 lcov: LCOV version 1.14 00:01:49.008 19:02:35 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:01:50.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:01:50.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:01:50.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:01:50.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:01:50.383 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:01:50.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:01:50.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:01:50.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:01:50.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:01:50.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:01:50.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:01:50.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:01:50.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:01:50.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:01:50.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:01:50.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:01:50.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:01:50.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:01:50.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:01:50.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:01:50.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:01:50.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:01:50.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:01:50.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:01:50.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:01:50.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:01:50.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:01:50.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:01:50.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:01:50.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:01:50.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:01:50.383 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:01:50.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:01:50.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:01:50.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:01:50.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:01:50.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:01:50.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:01:50.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:01:50.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:01:50.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:01:50.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:01:50.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:01:50.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:01:50.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:01:50.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:01:50.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:01:50.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:01:50.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:01:50.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:01:50.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:01:50.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:01:50.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:01:50.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:01:50.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:01:50.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:01:50.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:01:50.643 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:01:50.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:01:50.643 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:01:50.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:01:50.643 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:01:50.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:01:50.643 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:01:50.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:01:50.643 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:01:50.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:01:50.643 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:01:50.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:01:50.643 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:01:50.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:01:50.643 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:01:50.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:01:50.643 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:01:50.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:01:50.643 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:01:50.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:01:50.643 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:01:50.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:01:50.643 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:01:50.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:01:50.643 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:01:50.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:01:50.643 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:01:50.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:01:50.643 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:01:50.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 
00:01:50.643 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:01:50.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:01:50.643 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:01:50.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:01:50.643 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:01:50.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:01:50.643 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:01:50.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:01:50.643 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:01:50.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:01:50.643 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:01:50.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:01:50.643 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:01:50.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:01:50.643 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:01:50.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:01:50.643 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:01:50.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:01:50.643 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:01:50.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:01:50.643 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:01:50.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:01:50.643 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:01:50.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:01:50.643 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:01:50.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:01:50.903 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:01:50.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no 
functions found 00:01:50.903 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:01:50.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:01:50.903 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:01:50.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:01:50.903 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:01:50.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:01:50.903 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:01:50.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:01:50.903 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:01:50.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:01:50.903 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:01:50.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:01:50.903 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:01:50.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:01:50.903 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:01:50.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:01:50.903 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:01:50.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:01:50.903 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:01:50.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:01:50.903 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:01:50.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:01:50.903 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:01:50.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:01:50.903 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:01:50.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:01:50.903 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:01:50.903 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:01:50.903 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:01:50.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:01:50.903 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:01:50.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:01:50.903 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:01:50.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:01:50.903 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:01:50.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:01:50.903 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:01:50.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:01:50.903 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:01:50.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:01:50.903 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:01:50.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:01:50.903 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:01:50.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:01:50.903 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:01:50.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:01:50.903 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:01:50.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:01:50.903 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:01:50.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:01:50.903 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:01:50.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:01:50.903 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:01:51.162 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:01:51.162 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:01:51.162 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:01:51.162 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:01:51.162 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:01:51.162 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:01:51.162 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:01:51.162 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:01:51.162 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:01:51.162 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:03.377 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:03.377 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:15.593 19:02:59 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:15.593 19:02:59 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:15.593 19:02:59 -- common/autotest_common.sh@10 -- # set +x 00:02:15.593 19:02:59 -- spdk/autotest.sh@91 -- # rm -f 00:02:15.593 19:02:59 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:16.966 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:17.223 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:17.223 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:17.223 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:17.223 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:17.223 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:17.223 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:17.223 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:17.223 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:17.481 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:17.481 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:17.481 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:17.481 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:17.481 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:17.481 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:17.481 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:17.481 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:02:17.481 19:03:03 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:17.481 19:03:03 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:17.481 19:03:03 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:17.481 19:03:03 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:17.481 19:03:03 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:17.481 19:03:03 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:17.481 19:03:03 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:17.481 19:03:03 -- 
common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:17.481 19:03:03 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:17.481 19:03:03 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:17.481 19:03:03 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:17.481 19:03:03 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:17.481 19:03:03 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:17.481 19:03:03 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:17.481 19:03:03 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:17.738 No valid GPT data, bailing 00:02:17.738 19:03:03 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:17.738 19:03:03 -- scripts/common.sh@391 -- # pt= 00:02:17.738 19:03:03 -- scripts/common.sh@392 -- # return 1 00:02:17.738 19:03:03 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:17.738 1+0 records in 00:02:17.738 1+0 records out 00:02:17.738 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00640071 s, 164 MB/s 00:02:17.738 19:03:03 -- spdk/autotest.sh@118 -- # sync 00:02:17.738 19:03:03 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:17.738 19:03:03 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:17.738 19:03:03 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:25.846 19:03:10 -- spdk/autotest.sh@124 -- # uname -s 00:02:25.846 19:03:10 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:25.846 19:03:10 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:25.846 19:03:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:25.846 19:03:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:25.846 19:03:10 -- common/autotest_common.sh@10 -- # set +x 00:02:25.846 ************************************ 00:02:25.846 START TEST setup.sh 00:02:25.846 ************************************ 00:02:25.846 19:03:10 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:25.846 * Looking for test storage... 00:02:25.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:25.846 19:03:11 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:25.846 19:03:11 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:25.846 19:03:11 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:25.846 19:03:11 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:25.846 19:03:11 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:25.846 19:03:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:25.846 ************************************ 00:02:25.846 START TEST acl 00:02:25.846 ************************************ 00:02:25.846 19:03:11 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:25.846 * Looking for test storage... 
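[editor's note] The trace above (spdk/autotest.sh@113-114) only zeroes a namespace after the GPT probe bails, falling back to blkid to confirm no partition table is present. A hedged sketch of that guard, reusing the blkid invocation visible in the trace; the device name is an assumption and the dd is intentionally destructive:

#!/usr/bin/env bash
# Sketch of the 'wipe only if no partition table' guard traced above.
# DEV is an assumption; this clobbers the first MiB of it, run with care.
DEV=${1:-/dev/nvme0n1}
[[ -b $DEV ]] || { echo "$DEV is not a block device" >&2; exit 1; }

# blkid prints a PTTYPE value (gpt, dos, ...) when a partition table exists.
pt=$(blkid -s PTTYPE -o value "$DEV" 2>/dev/null)
if [[ -n $pt ]]; then
    echo "$DEV has a $pt partition table; leaving it alone" >&2
    exit 1
fi

# No table found: clear the first MiB so stale metadata cannot confuse tests.
dd if=/dev/zero of="$DEV" bs=1M count=1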
00:02:25.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:25.846 19:03:11 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:25.846 19:03:11 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:25.846 19:03:11 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:25.846 19:03:11 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:25.846 19:03:11 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:25.846 19:03:11 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:25.846 19:03:11 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:25.846 19:03:11 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:25.846 19:03:11 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:25.846 19:03:11 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:25.847 19:03:11 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:25.847 19:03:11 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:25.847 19:03:11 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:25.847 19:03:11 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:25.847 19:03:11 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:25.847 19:03:11 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:29.123 19:03:14 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:29.123 19:03:14 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:29.123 19:03:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:29.123 19:03:14 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:29.123 19:03:14 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:29.123 19:03:14 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:31.645 Hugepages 00:02:31.645 node hugesize free / total 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.645 00:02:31.645 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.645 19:03:17 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:31.645 19:03:17 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:31.645 19:03:17 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:31.645 19:03:17 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:31.645 19:03:17 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:31.645 ************************************ 00:02:31.645 START TEST denied 00:02:31.645 ************************************ 00:02:31.645 19:03:17 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:02:31.645 19:03:17 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:02:31.645 19:03:17 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:31.645 19:03:17 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:02:31.645 19:03:17 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:31.645 19:03:17 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:34.981 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:02:34.981 19:03:20 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:02:34.981 19:03:20 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:34.981 19:03:20 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:34.981 19:03:20 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:02:34.981 19:03:20 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:02:34.981 19:03:20 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:34.981 19:03:20 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:34.981 19:03:20 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:34.981 19:03:20 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:34.981 19:03:20 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:39.172 00:02:39.172 real 0m7.512s 00:02:39.172 user 0m2.253s 00:02:39.172 sys 0m4.536s 00:02:39.172 19:03:25 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:02:39.172 19:03:25 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:39.172 ************************************ 00:02:39.172 END TEST denied 00:02:39.172 ************************************ 00:02:39.172 19:03:25 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:39.172 19:03:25 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:39.172 19:03:25 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:39.172 19:03:25 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:39.172 ************************************ 00:02:39.172 START TEST allowed 00:02:39.172 ************************************ 00:02:39.172 19:03:25 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:02:39.172 19:03:25 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:02:39.172 19:03:25 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:39.172 19:03:25 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:02:39.172 19:03:25 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:39.172 19:03:25 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:44.443 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:02:44.443 19:03:29 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:44.443 19:03:29 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:44.443 19:03:29 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:44.443 19:03:29 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:44.443 19:03:29 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:47.734 00:02:47.734 real 0m7.956s 00:02:47.734 user 0m1.988s 00:02:47.734 sys 0m4.347s 00:02:47.734 19:03:33 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:02:47.734 19:03:33 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:02:47.734 ************************************ 00:02:47.734 END TEST allowed 00:02:47.734 ************************************ 00:02:47.734 00:02:47.734 real 0m22.162s 00:02:47.734 user 0m6.454s 00:02:47.734 sys 0m13.385s 00:02:47.734 19:03:33 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:02:47.734 19:03:33 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:47.734 ************************************ 00:02:47.734 END TEST acl 00:02:47.734 ************************************ 00:02:47.734 19:03:33 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:47.734 19:03:33 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:47.734 19:03:33 setup.sh -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:02:47.734 19:03:33 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:47.734 ************************************ 00:02:47.734 START TEST hugepages 00:02:47.734 ************************************ 00:02:47.734 19:03:33 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:47.734 * Looking for test storage... 00:02:47.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:47.734 19:03:33 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:47.734 19:03:33 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:47.734 19:03:33 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:47.734 19:03:33 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:47.734 19:03:33 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:47.734 19:03:33 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:47.734 19:03:33 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:47.734 19:03:33 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:02:47.734 19:03:33 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:02:47.734 19:03:33 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:02:47.734 19:03:33 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.734 19:03:33 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:47.734 19:03:33 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:47.734 19:03:33 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.734 19:03:33 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.734 19:03:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.734 19:03:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.734 19:03:33 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 41695080 kB' 'MemAvailable: 45593996 kB' 'Buffers: 2704 kB' 'Cached: 10326408 kB' 'SwapCached: 0 kB' 'Active: 7179680 kB' 'Inactive: 3676148 kB' 'Active(anon): 6789816 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530104 kB' 'Mapped: 189632 kB' 'Shmem: 6263100 kB' 'KReclaimable: 480396 kB' 'Slab: 1104084 kB' 'SReclaimable: 480396 kB' 'SUnreclaim: 623688 kB' 'KernelStack: 22320 kB' 'PageTables: 8852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439048 kB' 'Committed_AS: 8214824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216836 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3079540 kB' 'DirectMap2M: 14432256 kB' 'DirectMap1G: 51380224 kB' 00:02:47.734 19:03:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:02:47.734-00:02:47.736 19:03:33 setup.sh.hugepages -- setup/common.sh@31-32 -- # [xtrace condensed: the read loop stepped over every non-matching /proc/meminfo key (MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, ... HugePages_Rsvd) with IFS=': ' / read -r var val _ / continue]
00:02:47.736 19:03:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:02:47.736 19:03:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:02:47.736 19:03:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:02:47.736 19:03:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:02:47.736 19:03:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:02:47.736 19:03:33 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:02:47.736 19:03:33 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:02:47.736 19:03:33 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:02:47.736 19:03:33 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:02:47.736 19:03:33 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:02:47.736 19:03:33 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:02:47.736 19:03:33 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:02:47.736 19:03:33 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:02:47.736 19:03:33 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:02:47.736 19:03:33 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:02:47.736 19:03:33 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:02:47.736 19:03:33 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:47.736 19:03:33 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:02:47.736 19:03:33 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:47.736 19:03:33 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:47.736 19:03:33 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:47.736 19:03:33 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:47.736 19:03:33 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:02:47.736 19:03:33 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:02:47.736 19:03:33 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:02:47.736 19:03:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:47.736 19:03:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:47.736 19:03:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:47.736 19:03:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:47.736 19:03:33 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:02:47.736 19:03:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:47.736 19:03:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:47.736 19:03:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:47.736 19:03:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
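The /proc/meminfo scan condensed above is SPDK's get_meminfo helper from scripts/setup/common.sh: it splits each meminfo line on ': ', skips keys that do not match the requested one, and prints the value of the match (here 2048, the hugepage size in kB). A minimal sketch of that loop, assuming the global /proc/meminfo path taken in this run (no per-node lookup):

get_meminfo_sketch() {
    local get=$1 var val _
    # IFS=': ' splits e.g. "Hugepagesize:       2048 kB" into var=Hugepagesize, val=2048, _=kB
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # each non-match is one "continue" in the xtrace above
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1   # assumption: an absent key is treated as failure
}

get_meminfo_sketch Hugepagesize   # prints 2048 on this runner

Once it returns, hugepages.sh records the 2048 kB default, and clear_hp zeroes nr_hugepages for every supported page size on both NUMA nodes (the four "echo 0" entries) before the test allocates its own pages.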
00:02:47.736 19:03:33 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:02:47.736 19:03:33 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:02:47.736 19:03:33 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:02:47.736 19:03:33 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:02:47.736 19:03:33 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:02:47.736 19:03:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:47.736 ************************************
00:02:47.736 START TEST default_setup
00:02:47.736 ************************************
00:02:47.736 19:03:33 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup
00:02:47.736 19:03:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:02:47.736 19:03:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:02:47.736 19:03:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:02:47.736 19:03:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:02:47.736 19:03:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:02:47.736 19:03:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:02:47.736 19:03:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:47.736 19:03:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:47.736 19:03:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:02:47.736 19:03:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:02:47.736 19:03:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:02:47.736 19:03:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:47.736 19:03:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:47.736 19:03:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:47.736 19:03:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:47.736 19:03:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:02:47.736 19:03:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:47.736 19:03:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:02:47.736 19:03:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:02:47.736 19:03:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:02:47.736 19:03:33 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:02:47.736 19:03:33 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
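get_test_nr_hugepages is plain unit arithmetic: the requested 2097152 kB (2 GiB) divided by the 2048 kB default hugepage size gives nr_hugepages=1024, all assigned to node 0 (nodes_test[0]=1024). setup.sh then applies the reservation and rebinds the test devices; the per-node page count itself goes through the standard kernel sysfs knob, as in this sketch (the helper name is illustrative, not SPDK's):

# Reserve 2-MiB hugepages on one NUMA node; clear_hp's "echo 0" above targets the same files.
set_node_hugepages() {
    local node=$1 count=$2
    echo "$count" > "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages"
}

size_kb=2097152                    # requested reservation, in kB
nr_hugepages=$((size_kb / 2048))   # 2097152 / 2048 = 1024 pages
set_node_hugepages 0 "$nr_hugepages"

The vfio-pci lines that follow are setup.sh unbinding the Intel I/OAT DMA channels (8086:2021) and the NVMe drive (8086:0a54) from their kernel drivers and handing them to vfio-pci so SPDK's userspace drivers can claim them.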
00:02:51.029 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:51.029 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:51.029 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:51.029 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:51.029 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:51.029 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:51.029 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:51.029 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:51.029 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:52.424 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:02:52.688 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:52.688 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:02:52.688 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:02:52.688 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:02:52.688 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:02:52.688 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:02:52.688 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:02:52.688 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:52.688 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:52.688 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:52.688 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:52.688 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:52.688 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:52.688 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.688 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:52.688 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:52.688 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.688 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.688 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.688 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.688 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43867584 kB' 'MemAvailable: 47766460 kB' 'Buffers: 2704 kB' 'Cached: 10326544 kB' 'SwapCached: 0 kB' 'Active: 7192188 kB' 'Inactive: 3676148 kB' 'Active(anon): 6802324 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542500 kB' 'Mapped: 189744 kB' 'Shmem: 6263236 kB' 'KReclaimable: 480356 kB' 'Slab: 1102700 kB' 'SReclaimable: 480356 kB' 'SUnreclaim: 622344 kB' 'KernelStack: 22400 kB' 'PageTables: 9180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8230132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
216772 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3079540 kB' 'DirectMap2M: 14432256 kB' 'DirectMap1G: 51380224 kB' 00:02:52.688 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.688 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.688 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.689 
19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.689 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- 
setup/common.sh@20 -- # local mem_f mem 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43870652 kB' 'MemAvailable: 47769528 kB' 'Buffers: 2704 kB' 'Cached: 10326548 kB' 'SwapCached: 0 kB' 'Active: 7192532 kB' 'Inactive: 3676148 kB' 'Active(anon): 6802668 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542788 kB' 'Mapped: 189684 kB' 'Shmem: 6263240 kB' 'KReclaimable: 480356 kB' 'Slab: 1102700 kB' 'SReclaimable: 480356 kB' 'SUnreclaim: 622344 kB' 'KernelStack: 22240 kB' 'PageTables: 8972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8230152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216692 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3079540 kB' 'DirectMap2M: 14432256 kB' 'DirectMap1G: 51380224 kB' 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.690 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.691 19:03:38 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:52.691 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == 
[... repetitive xtrace omitted: get_meminfo compares each remaining /proc/meminfo key (NFS_Unstable through HugePages_Rsvd) against HugePages_Surp; all continue ...]
00:02:52.692 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:52.692 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:52.692 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:52.692 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:02:52.692 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:52.692 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:52.692 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:52.692 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:52.692 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:52.692 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:52.692 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:52.692 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:52.692 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:52.692 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:52.692 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:52.692 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
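What the trace above is stepping through is the harness's get_meminfo helper: it slurps a meminfo-style file, splits each line on ': ', and echoes the value of the one requested key, skipping everything else (hence the long runs of continue entries). A minimal standalone sketch of that parsing pattern, assuming a simple single-key interface (the function name here is illustrative, not SPDK's exact code):

  #!/usr/bin/env bash
  # Minimal sketch of the meminfo lookup being traced: split each line on
  # ': ' and echo the value of the single requested key.
  get_meminfo_value() {
      local get=$1 file=${2:-/proc/meminfo} var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < "$file"
      return 1  # key not present
  }
  # usage: surp=$(get_meminfo_value HugePages_Surp)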
00:02:52.692 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43870208 kB' 'MemAvailable: 47769084 kB' 'Buffers: 2704 kB' 'Cached: 10326560 kB' 'SwapCached: 0 kB' 'Active: 7192280 kB' 'Inactive: 3676148 kB' 'Active(anon): 6802416 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542456 kB' 'Mapped: 189744 kB' 'Shmem: 6263252 kB' 'KReclaimable: 480356 kB' 'Slab: 1102768 kB' 'SReclaimable: 480356 kB' 'SUnreclaim: 622412 kB' 'KernelStack: 22160 kB' 'PageTables: 8468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8230172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216676 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3079540 kB' 'DirectMap2M: 14432256 kB' 'DirectMap1G: 51380224 kB'
[... repetitive xtrace omitted: per-key comparisons of every meminfo field (MemTotal through HugePages_Free) against HugePages_Rsvd; all continue ...]
00:02:52.694 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:52.694 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:52.694 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:52.694 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:02:52.694 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:52.694 nr_hugepages=1024
00:02:52.694 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:52.694 resv_hugepages=0
00:02:52.694 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:52.694 surplus_hugepages=0
00:02:52.694 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:52.694 anon_hugepages=0
00:02:52.694 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:52.694 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:52.694 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:52.694 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:52.694 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:52.694 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:52.694 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:52.694 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:52.694 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:52.694 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:52.694 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:52.694 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:52.694 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:52.694 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
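The @107/@109 checks assert that the counters read back are self-consistent with the requested pool of 1024 pages. The same hugepage counters are exported per page size under /sys/kernel/mm/hugepages, so a similar cross-check can be done without parsing /proc/meminfo at all; a sketch under that assumption (2048 kB pages, as in this run):

  #!/usr/bin/env bash
  # Sketch: cross-check /proc/meminfo hugepage counters against the per-size
  # sysfs pool; HugePages_Total should equal the persistent pool plus surplus.
  hp=/sys/kernel/mm/hugepages/hugepages-2048kB   # 2048 kB pages, as in this run
  nr=$(<"$hp/nr_hugepages")
  surp=$(<"$hp/surplus_hugepages")
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  ((total == nr + surp)) && echo "hugepage accounting consistent: $total pages"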
00:02:52.695 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43869804 kB' 'MemAvailable: 47768680 kB' 'Buffers: 2704 kB' 'Cached: 10326588 kB' 'SwapCached: 0 kB' 'Active: 7192780 kB' 'Inactive: 3676148 kB' 'Active(anon): 6802916 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542932 kB' 'Mapped: 189744 kB' 'Shmem: 6263280 kB' 'KReclaimable: 480356 kB' 'Slab: 1102768 kB' 'SReclaimable: 480356 kB' 'SUnreclaim: 622412 kB' 'KernelStack: 22256 kB' 'PageTables: 8628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8230196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216756 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3079540 kB' 'DirectMap2M: 14432256 kB' 'DirectMap1G: 51380224 kB'
[... repetitive xtrace omitted: per-key comparisons of every meminfo field (MemTotal through Unaccepted) against HugePages_Total; all continue ...]
00:02:52.696 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:52.696 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:02:52.696 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:52.696 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:52.696 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:02:52.696 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:02:52.696 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:52.696 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:52.696 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:52.696 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:52.696 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:52.696 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:52.696 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:52.696 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:52.696 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:52.696 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:52.696 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:02:52.696 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:52.696 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:52.696 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:52.696 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:52.696 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:52.696 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:52.696 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:52.696 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:52.697 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:52.697 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 27751124 kB' 'MemUsed: 4840960 kB' 'SwapCached: 0 kB' 'Active: 1423372 kB' 'Inactive: 274308 kB' 'Active(anon): 1263520 kB' 'Inactive(anon): 0 kB' 'Active(file): 159852 kB' 'Inactive(file): 274308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1587384 kB' 'Mapped: 83920 kB' 'AnonPages: 113468 kB' 'Shmem: 1153224 kB' 'KernelStack: 12056 kB' 'PageTables: 3364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 150368 kB' 'Slab: 401080 kB' 'SReclaimable: 150368 kB' 'SUnreclaim: 250712 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
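For the per-node pass, get_meminfo is called with a node argument, so it swaps /proc/meminfo for /sys/devices/system/node/node0/meminfo; lines in that file carry a "Node 0 " prefix, which the traced extglob substitution mem=("${mem[@]#Node +([0-9]) }") strips before the usual key match. A self-contained sketch of the same idea (the function name is illustrative):

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the +([0-9]) pattern below
  # Sketch: read one key from a NUMA node's meminfo. Lines there look like
  # "Node 0 MemTotal: 32592084 kB", so strip the "Node N " prefix first.
  node_meminfo() {
      local get=$1 node=$2 line var val _
      while IFS= read -r line; do
          line=${line#Node +([0-9]) }              # drop the "Node 0 " prefix
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < "/sys/devices/system/node/node$node/meminfo"
      return 1
  }
  # usage: node_meminfo HugePages_Surp 0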
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:52.697 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:52.697 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [... get_meminfo scans the remaining /proc/meminfo keys (MemFree through HugePages_Total) against \H\u\g\e\P\a\g\e\s\_\S\u\r\p, repeating IFS=': ' / read -r var val _ / continue for every non-matching key ...]
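An aside on the escaped strings in the comparisons above and below: xtrace renders the right-hand side of each [[ ... ]] test with one backslash per character because the helper compares every meminfo key against a quoted, literal pattern. A minimal standalone reproduction of the idiom (hypothetical, not the SPDK helper itself):

    #!/usr/bin/env bash
    # Under xtrace, the quoted "$get" below prints as \H\u\g\e\P\a\g\e\s\_\S\u\r\p,
    # one escape per character, exactly as seen in the trace.
    get=HugePages_Surp
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every other meminfo key
        echo "$val"                        # e.g. 0
        break
    done < /proc/meminfo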
00:02:52.698 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:52.698 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:52.698 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:52.698 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:52.698 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:52.698 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:52.698 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:52.698 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:52.698 19:03:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:52.698 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:52.698 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:52.698 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:52.698 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:52.698 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:02:52.698 node0=1024 expecting 1024
00:02:52.698 19:03:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:02:52.698 
00:02:52.698 real	0m5.356s
00:02:52.698 user	0m1.395s
00:02:52.698 sys	0m2.433s
00:02:52.698 19:03:38 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable
00:02:52.698 19:03:38 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:02:52.698 ************************************
00:02:52.698 END TEST default_setup
00:02:52.698 ************************************
00:02:52.958 19:03:38 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:02:52.958 19:03:38 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:02:52.958 19:03:38 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:02:52.958 19:03:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:52.958 ************************************
00:02:52.958 START TEST per_node_1G_alloc
00:02:52.958 ************************************
00:02:52.958 19:03:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc
00:02:52.958 19:03:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:02:52.958 19:03:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:02:52.958 19:03:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:02:52.958 19:03:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:02:52.958 19:03:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:02:52.958 19:03:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:02:52.958 19:03:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:02:52.958 19:03:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:52.958 19:03:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:02:52.958 19:03:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:02:52.958 19:03:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:02:52.958 19:03:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:02:52.958 19:03:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:02:52.958 19:03:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:52.958 19:03:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:52.958 19:03:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:52.958 19:03:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:02:52.958 19:03:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:52.958 19:03:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:52.958 19:03:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:52.958 19:03:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:52.958 19:03:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:02:52.958 19:03:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:02:52.958 19:03:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:02:52.958 19:03:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:02:52.958 19:03:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:52.958 19:03:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:56.255 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:02:56.255 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:02:56.255 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:02:56.255 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:02:56.255 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:02:56.255 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:02:56.255 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:02:56.255 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:02:56.255 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:02:56.255 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:02:56.255 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:02:56.255 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:02:56.255 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:02:56.255 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:02:56.255 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:02:56.255 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:02:56.255 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:56.255 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
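The trace above shows get_test_nr_hugepages turning the requested 1048576 kB (1 GiB) into 512 default-size pages for each of nodes 0 and 1. A rough standalone sketch of that arithmetic, assuming the 2048 kB Hugepagesize reported in the meminfo dumps further down; the names follow the trace, but the function body is an approximation, not the SPDK original:

    #!/usr/bin/env bash
    # Approximation of the per-node hugepage accounting seen in the trace.
    default_hugepages=2048                 # kB, per 'Hugepagesize: 2048 kB' below
    size=1048576                           # kB (1 GiB), first argument in the trace
    node_ids=(0 1)                         # remaining arguments (HUGENODE=0,1)

    (( size >= default_hugepages )) || exit 1
    nr_hugepages=$(( size / default_hugepages ))   # 512 pages requested per node

    declare -A nodes_test
    for node in "${node_ids[@]}"; do
        nodes_test[$node]=$nr_hugepages    # node0=512, node1=512 -> 1024 total
    done
    echo "NRHUGE=$nr_hugepages HUGENODE=${node_ids[*]}"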
00:02:56.255 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:02:56.255 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:02:56.255 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:56.255 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:56.255 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:56.255 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:56.255 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:56.255 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:56.255 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:56.255 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:56.255 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:56.255 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:56.255 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:56.255 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:56.255 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:56.255 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:56.255 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:56.255 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:56.255 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:56.255 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:56.255 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43903124 kB' 'MemAvailable: 47802000 kB' 'Buffers: 2704 kB' 'Cached: 10326692 kB' 'SwapCached: 0 kB' 'Active: 7191072 kB' 'Inactive: 3676148 kB' 'Active(anon): 6801208 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540596 kB' 'Mapped: 188712 kB' 'Shmem: 6263384 kB' 'KReclaimable: 480356 kB' 'Slab: 1102028 kB' 'SReclaimable: 480356 kB' 'SUnreclaim: 621672 kB' 'KernelStack: 22208 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8219384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216788 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3079540 kB' 'DirectMap2M: 14432256 kB' 'DirectMap1G: 51380224 kB'
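The printf above is get_meminfo dumping its mapfile of /proc/meminfo before scanning it key by key. Note the probe of /sys/devices/system/node/node/meminfo: $node is empty here, so the per-node file is skipped and the global file is parsed. A sketch of that flow under those assumptions (a reconstruction, not the verbatim SPDK helper):

    #!/usr/bin/env bash
    # Reconstruction of the get_meminfo flow traced above. With $node empty,
    # the per-node probe fails and /proc/meminfo is read; per-node files
    # prefix every line with 'Node <n> ', which the extglob expansion strips.
    shopt -s extglob
    get=${1:-AnonHugePages}
    node=${2:-}

    mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # harmless no-op for /proc/meminfo

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"                      # 0 for AnonHugePages in this run
        exit 0
    done
    exit 1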
00:02:56.256 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [... get_meminfo scans each snapshot key (MemTotal, MemFree, MemAvailable, ..., Percpu, HardwareCorrupted) against \A\n\o\n\H\u\g\e\P\a\g\e\s, issuing continue for every non-matching key ...]
00:02:56.257 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:56.257 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:56.257 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:56.257 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:02:56.257 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:56.257 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:56.257 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
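At this point verify_nr_hugepages has anon=0 and fetches HugePages_Surp and HugePages_Rsvd the same way. A condensed, hypothetical rendering of the bookkeeping it is doing (the real function also reconciles per-node sysfs counters):

    #!/usr/bin/env bash
    # Hypothetical condensation of the verify_nr_hugepages checks.
    get_meminfo() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }

    anon=$(get_meminfo AnonHugePages)    # 0 here: no THP pages in the way
    surp=$(get_meminfo HugePages_Surp)   # 0: nothing allocated over the limit
    resv=$(get_meminfo HugePages_Rsvd)   # 0: nothing reserved but unfaulted
    total=$(get_meminfo HugePages_Total)

    expected=1024                        # 512 pages on each of nodes 0 and 1
    echo "total=$total expecting $expected (anon=$anon surp=$surp resv=$resv)"
    (( total == expected ))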
00:02:56.257 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:56.257 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:56.257 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:56.257 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:56.257 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:56.257 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:56.257 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:56.257 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:56.257 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:56.257 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43903408 kB' 'MemAvailable: 47802284 kB' 'Buffers: 2704 kB' 'Cached: 10326696 kB' 'SwapCached: 0 kB' 'Active: 7190548 kB' 'Inactive: 3676148 kB' 'Active(anon): 6800684 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540576 kB' 'Mapped: 188628 kB' 'Shmem: 6263388 kB' 'KReclaimable: 480356 kB' 'Slab: 1101944 kB' 'SReclaimable: 480356 kB' 'SUnreclaim: 621588 kB' 'KernelStack: 22192 kB' 'PageTables: 8668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8217912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216740 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3079540 kB' 'DirectMap2M: 14432256 kB' 'DirectMap1G: 51380224 kB'
00:02:56.258 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [... get_meminfo scans each snapshot key (MemTotal through HugePages_Rsvd) against \H\u\g\e\P\a\g\e\s\_\S\u\r\p, issuing continue for every non-matching key ...]
00:02:56.259 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:56.259 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:56.259 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:56.259 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:02:56.259 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
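The surplus counter comes back 0, and the snapshots printed above let the numbers be cross-checked directly: the hugetlb pool must equal pages times page size. A quick sanity check with values copied from the trace:

    #!/usr/bin/env bash
    # Cross-check of the hugepage fields in the snapshots (values from the trace).
    HugePages_Total=1024
    Hugepagesize=2048        # kB
    Hugetlb=2097152          # kB, as printed in every snapshot
    (( HugePages_Total * Hugepagesize == Hugetlb )) \
        && echo "hugetlb pool consistent: $(( Hugetlb / 1024 / 1024 )) GiB"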
-- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:56.259 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:56.259 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:56.259 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.259 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.259 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.259 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.259 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.259 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.259 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.259 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.259 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43906296 kB' 'MemAvailable: 47805172 kB' 'Buffers: 2704 kB' 'Cached: 10326712 kB' 'SwapCached: 0 kB' 'Active: 7190624 kB' 'Inactive: 3676148 kB' 'Active(anon): 6800760 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540580 kB' 'Mapped: 188628 kB' 'Shmem: 6263404 kB' 'KReclaimable: 480356 kB' 'Slab: 1101944 kB' 'SReclaimable: 480356 kB' 'SUnreclaim: 621588 kB' 'KernelStack: 22176 kB' 'PageTables: 8844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8219424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216772 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3079540 kB' 'DirectMap2M: 14432256 kB' 'DirectMap1G: 51380224 kB' 00:02:56.259 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.259 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.259 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.259 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.259 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.259 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.259 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.259 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.259 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.259 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:02:56.259 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.260 19:03:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.260 19:03:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.260 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.261 19:03:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.261 19:03:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.261 19:03:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:56.261 nr_hugepages=1024 00:02:56.261 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:56.261 resv_hugepages=0 00:02:56.262 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:56.262 surplus_hugepages=0 00:02:56.262 19:03:42 
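The records above are SPDK's setup/common.sh get_meminfo helper scanning /proc/meminfo one field at a time; every skipped field is one continue record, which is what fills this stretch of the log. A minimal bash sketch of that pattern, reconstructed from the traced commands (the helper's exact body in setup/common.sh may differ):

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern traced above (setup/common.sh@16-@33).
    # Reconstructed from the traced commands, not copied from the script.
    shopt -s extglob    # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=${2:-}    # field name, optional NUMA node
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            # Per-node lookup (common.sh@24 in the node0 call further down)
            mem_f=/sys/devices/system/node/node$node/meminfo
        elif [[ -n $node ]]; then
            return 1    # assumption: asking for a nonexistent node fails
        fi

        mapfile -t mem <"$mem_f"
        # Per-node files prefix each line with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")

        # Field-by-field scan; every non-matching field is one of the
        # "continue" records that fill the trace above.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Rsvd      # system-wide lookup -> 0 in this run
    get_meminfo HugePages_Surp 0    # node0 lookup -> 0 in this run

Feeding the array back through printf in a process substitution is also why the trace shows the whole meminfo dump as a single printf record before the per-field read records begin.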
00:02:56.262 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:56.262 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:56.262 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:56.262 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:56.262 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
[... same get_meminfo preamble as above: locals, mem_f=/proc/meminfo, mapfile, "Node N " prefix strip, IFS=': ', read ...]
00:02:56.262 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43906468 kB' 'MemAvailable: 47805344 kB' 'Buffers: 2704 kB' 'Cached: 10326736 kB' 'SwapCached: 0 kB' 'Active: 7190512 kB' 'Inactive: 3676148 kB' 'Active(anon): 6800648 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540384 kB' 'Mapped: 188628 kB' 'Shmem: 6263428 kB' 'KReclaimable: 480356 kB' 'Slab: 1101944 kB' 'SReclaimable: 480356 kB' 'SUnreclaim: 621588 kB' 'KernelStack: 22272 kB' 'PageTables: 8484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8219448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216836 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3079540 kB' 'DirectMap2M: 14432256 kB' 'DirectMap1G: 51380224 kB'
[... per-field scan elided: each field from MemTotal through Unaccepted is tested against HugePages_Total and skipped via continue ...]
00:02:56.264 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:56.264 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:02:56.264 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:56.264 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
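The @107-@110 checks just above, together with the get_nodes walk that follows, reconcile the kernel's hugepage counters against what the test requested and then move to per-node accounting. A rough bash sketch of that bookkeeping, reusing the get_meminfo sketch above (variable names and the @-line numbers come from this run's trace; the 512-per-node split and the nodes_test seeding are assumptions about this run):

    shopt -s extglob                        # for the node+([0-9]) glob below

    nr_hugepages=1024                       # pages requested by the test
    surp=$(get_meminfo HugePages_Surp)      # hugepages.sh@99  -> 0 here
    resv=$(get_meminfo HugePages_Rsvd)      # hugepages.sh@100 -> 0 here

    # hugepages.sh@110: the kernel-reported total must equal the request
    # plus any surplus and reserved pages.
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) \
        || echo "hugepage accounting mismatch"

    # get_nodes (hugepages.sh@27-@33): enumerate NUMA nodes; this box has
    # node0 and node1, and the trace records 512 pages expected on each.
    declare -a nodes_sys nodes_test
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512
    done
    no_nodes=${#nodes_sys[@]}               # 2 on this machine
    (( no_nodes > 0 ))

    # hugepages.sh@115-@117: fold reserved pages into each node's expected
    # count, then read that node's own HugePages_Surp from its meminfo.
    nodes_test=("${nodes_sys[@]}")          # assumption: seeded 512/512 earlier
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        get_meminfo HugePages_Surp "$node"
    done

The per-node lookups below go through the same scan loop, only against /sys/devices/system/node/node0/meminfo instead of /proc/meminfo.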
00:02:56.264 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:56.264 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:02:56.264 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:56.264 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:56.264 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:56.264 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:56.264 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:56.264 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:56.264 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:56.264 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:56.264 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:56.264 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:56.264 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:02:56.264 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:56.264 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:56.264 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:56.264 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:56.264 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:56.264 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:56.264 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:56.264 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:56.264 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 28819548 kB' 'MemUsed: 3772536 kB' 'SwapCached: 0 kB' 'Active: 1420520 kB' 'Inactive: 274308 kB' 'Active(anon): 1260668 kB' 'Inactive(anon): 0 kB' 'Active(file): 159852 kB' 'Inactive(file): 274308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1587436 kB' 'Mapped: 83068 kB' 'AnonPages: 110560 kB' 'Shmem: 1153276 kB' 'KernelStack: 11912 kB' 'PageTables: 3276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 150368 kB' 'Slab: 400596 kB' 'SReclaimable: 150368 kB' 'SUnreclaim: 250228 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:02:56.264 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[... per-field scan of node0 meminfo elided: MemTotal through HugePages_Total each tested against HugePages_Surp and skipped via continue; the scan resumes below ...]
00:02:56.265 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:56.265 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read
-r var val _ 00:02:56.265 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.265 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.265 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.265 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.265 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.265 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.265 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:56.265 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:56.265 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:56.265 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:56.265 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:56.265 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:56.265 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:02:56.265 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:56.265 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.265 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.265 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:56.266 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:56.266 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.266 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.266 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.266 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.266 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 15087276 kB' 'MemUsed: 12615832 kB' 'SwapCached: 0 kB' 'Active: 5770116 kB' 'Inactive: 3401840 kB' 'Active(anon): 5540104 kB' 'Inactive(anon): 0 kB' 'Active(file): 230012 kB' 'Inactive(file): 3401840 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8742004 kB' 'Mapped: 105560 kB' 'AnonPages: 429948 kB' 'Shmem: 5110152 kB' 'KernelStack: 10376 kB' 'PageTables: 5616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 329988 kB' 'Slab: 701348 kB' 'SReclaimable: 329988 kB' 'SUnreclaim: 371360 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:56.266 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.266 19:03:42 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue [xtrace condensed: the read loop likewise skips the remaining node1 meminfo fields on its way to HugePages_Surp]
00:02:56.267 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.267 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.267 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.267 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:56.267 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.267 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.267 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.267 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.267 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:56.267 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:56.267 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:56.267 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:56.267 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:56.267 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:56.267 node0=512 expecting 512 00:02:56.267 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:56.267 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:56.267 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:56.267 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:56.267 node1=512 expecting 512 00:02:56.267 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:56.267 00:02:56.267 real 0m3.265s 00:02:56.267 user 0m1.130s 00:02:56.267 sys 0m2.143s 00:02:56.267 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:02:56.267 19:03:42 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:56.267 ************************************ 00:02:56.267 END TEST per_node_1G_alloc 00:02:56.267 ************************************ 00:02:56.267 19:03:42 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:56.267 19:03:42 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:56.267 19:03:42 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:56.267 19:03:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:56.267 ************************************ 00:02:56.267 
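For reference, a minimal sketch of the get_meminfo helper that produces the field scans above, reconstructed from the setup/common.sh@17-@33 xtrace lines; the explicit extglob setting and the no-match return value are assumptions not visible in the trace:

shopt -s extglob   # the "Node N " strip below uses an extended glob

get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # A per-node query reads that node's own meminfo when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node N "
    # Scan the "Field: value ..." pairs until the requested field matches.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

In the trace, get_meminfo HugePages_Surp 1 reads /sys/devices/system/node/node1/meminfo and prints 0, which is the value hugepages.sh folds into nodes_test[1].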
START TEST even_2G_alloc 00:02:56.267 ************************************ 00:02:56.267 19:03:42 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:02:56.267 19:03:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:02:56.267 19:03:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:56.267 19:03:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:56.267 19:03:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:56.267 19:03:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:56.267 19:03:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:56.267 19:03:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:56.267 19:03:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:56.267 19:03:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:56.267 19:03:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:56.267 19:03:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:56.267 19:03:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:56.267 19:03:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:56.267 19:03:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:56.267 19:03:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:56.267 19:03:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:56.267 19:03:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:02:56.267 19:03:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:56.267 19:03:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:56.267 19:03:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:56.267 19:03:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:56.267 19:03:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:56.267 19:03:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:56.267 19:03:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:56.267 19:03:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:02:56.267 19:03:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:02:56.267 19:03:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:56.267 19:03:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:58.817 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:58.817 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:58.817 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:58.817 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:58.817 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:58.817 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 
00:02:58.817 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:58.817 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:58.817 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:58.817 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:58.817 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:58.817 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:58.817 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:58.817 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:58.817 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:58.818 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:58.818 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43906080 kB' 'MemAvailable: 47804956 kB' 'Buffers: 2704 kB' 'Cached: 10326844 kB' 'SwapCached: 0 kB' 'Active: 7191384 kB' 'Inactive: 3676148 kB' 'Active(anon): 6801520 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541112 kB' 'Mapped: 188672 kB' 'Shmem: 6263536 kB' 'KReclaimable: 480356 kB' 'Slab: 1101944 kB' 'SReclaimable: 480356 kB' 'SUnreclaim: 621588 kB' 
'KernelStack: 22192 kB' 'PageTables: 8396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8219720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216804 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3079540 kB' 'DirectMap2M: 14432256 kB' 'DirectMap1G: 51380224 kB' 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.084 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.085 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.085 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.085 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.085 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.085 
19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [xtrace condensed: the read loop skips the intervening meminfo fields while scanning for AnonHugePages]
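The verify_nr_hugepages steps above first test the transparent-hugepage mode string ("always [madvise] never") against *[never]* and only then read AnonHugePages. A minimal sketch of that anon-accounting step, assuming the get_meminfo sketch shown earlier and the standard THP sysfs control file:

anon=0
# The bracketed entry is the active THP mode, e.g. "always [madvise] never".
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp != *"[never]"* ]]; then
    # THP is not disabled, so anonymous huge pages could distort the
    # hugepage accounting; record the current AnonHugePages value.
    anon=$(get_meminfo AnonHugePages)
fi

Here the snapshots report 'AnonHugePages: 0 kB', so the match just below yields anon=0.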
00:02:59.085 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.085 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:59.086 19:03:45 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43906432 kB' 'MemAvailable: 47805308 kB' 'Buffers: 2704 kB' 'Cached: 10326848 kB' 'SwapCached: 0 kB' 'Active: 7191560 kB' 'Inactive: 3676148 kB' 'Active(anon): 6801696 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541280 kB' 'Mapped: 188644 kB' 'Shmem: 6263540 kB' 'KReclaimable: 480356 kB' 'Slab: 1101952 kB' 'SReclaimable: 480356 kB' 'SUnreclaim: 621596 kB' 'KernelStack: 22496 kB' 'PageTables: 9320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8219712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216820 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3079540 kB' 'DirectMap2M: 14432256 kB' 'DirectMap1G: 51380224 kB' 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.086 19:03:45 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ [xtrace condensed: the read loop again skips the intervening meminfo fields while scanning for HugePages_Surp; the trace continues]
00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.087 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.088 19:03:45 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- 
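[editor's note: the trace above is setup/common.sh's get_meminfo helper resolving HugePages_Surp. The pattern it follows: snapshot /proc/meminfo with mapfile, strip any "Node N " prefix, then read-loop over the fields with IFS=': ' and continue until the requested key matches. A minimal standalone sketch of that same pattern, assuming bash; the name get_meminfo_value and its interface are ours, not the script's:]

  #!/usr/bin/env bash
  # Sketch of the traced lookup: print the value of one meminfo field.
  # Usage: get_meminfo_value <field> [node]
  get_meminfo_value() {
      local get=$1 node=$2 var val _ line
      local mem_f=/proc/meminfo
      # With a node argument, read that node's own meminfo instead.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      shopt -s extglob                        # needed for the +([0-9]) pattern
      mem=("${mem[@]#Node +([0-9]) }")        # node files prefix lines with "Node N "
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue    # skip fields until the key matches
          echo "$val"
          return 0
      done
      return 1
  }
  get_meminfo_value HugePages_Surp            # prints 0 on the box traced here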
00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:59.088 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43911092 kB' 'MemAvailable: 47809968 kB' 'Buffers: 2704 kB' 'Cached: 10326876 kB' 'SwapCached: 0 kB' 'Active: 7191076 kB' 'Inactive: 3676148 kB' 'Active(anon): 6801212 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540792 kB' 'Mapped: 188640 kB' 'Shmem: 6263568 kB' 'KReclaimable: 480356 kB' 'Slab: 1101888 kB' 'SReclaimable: 480356 kB' 'SUnreclaim: 621532 kB' 'KernelStack: 22288 kB' 'PageTables: 8948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8218772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216756 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3079540 kB' 'DirectMap2M: 14432256 kB' 'DirectMap1G: 51380224 kB'
[field-by-field scan trace elided: setup/common.sh@31 re-reads each line of the snapshot above with IFS=': ', and setup/common.sh@32 continues past every field until the key below matches]
00:02:59.090 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:59.090 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:59.090 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:59.090 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:59.090 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:59.090 nr_hugepages=1024
00:02:59.090 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:59.090 resv_hugepages=0
00:02:59.090 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:59.090 surplus_hugepages=0
00:02:59.090 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:59.090 anon_hugepages=0
00:02:59.090 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:59.090 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
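[editor's note: the assertions just above are the invariant this test is really checking: the kernel must report exactly the requested pool, with nothing reserved or surplus. Restated as a few lines of bash, reusing the hypothetical get_meminfo_value sketch from earlier; 1024 is the request visible in the log:]

  nr_hugepages=1024                               # requested pool size, per the log
  surp=$(get_meminfo_value HugePages_Surp)        # 0 in the snapshots above
  resv=$(get_meminfo_value HugePages_Rsvd)        # 0 in the snapshots above
  total=$(get_meminfo_value HugePages_Total)      # 1024 in the snapshots above
  (( total == nr_hugepages + surp + resv )) || exit 1
  # Cross-check from the dumps: Hugetlb = HugePages_Total * Hugepagesize,
  # and 1024 * 2048 kB = 2097152 kB, matching the Hugetlb field reported.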
'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540816 kB' 'Mapped: 188640 kB' 'Shmem: 6263592 kB' 'KReclaimable: 480356 kB' 'Slab: 1101920 kB' 'SReclaimable: 480356 kB' 'SUnreclaim: 621564 kB' 'KernelStack: 22320 kB' 'PageTables: 8652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8220288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216788 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3079540 kB' 'DirectMap2M: 14432256 kB' 'DirectMap1G: 51380224 kB' 00:02:59.090 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.090 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.090 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.090 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.090 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.090 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.091 19:03:45 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:59.091 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: read -r var val _ / continue repeated for each non-matching meminfo key -- KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted -- none matched HugePages_Total]
00:02:59.092 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
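The @31/@32 pairs condensed above are the xtrace of a plain key-scan: read each 'Key: value' record of meminfo, skip every key that is not the one requested, then echo the matching value. A minimal sketch of that loop, assuming plain /proc/meminfo input (the per-node variant is traced further below):

    # Sketch of the get_meminfo scan seen in the trace (simplified).
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do      # the @31 'read -r var val _' lines
            [[ $var == "$get" ]] || continue      # the @32 'continue' lines for non-matching keys
            echo "$val"                           # the @33 echo that follows
            return 0
        done < /proc/meminfo
        return 1                                  # requested key absent
    }

Here get_meminfo HugePages_Total prints 1024, which is exactly the value the @33 echo returns next in the trace.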
00:02:59.092 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:59.092 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:59.092 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:59.092 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:59.092 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:59.092 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:59.092 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:59.092 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:59.092 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:59.092 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:59.092 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:59.092 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:59.092 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:59.092 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:59.092 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:59.092 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:02:59.092 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:59.092 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:59.092 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.092 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:59.092 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:59.092 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.092 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.092 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.092 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.092 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 28812216 kB' 'MemUsed: 3779868 kB' 'SwapCached: 0 kB' 'Active: 1420456 kB' 'Inactive: 274308 kB' 'Active(anon): 1260604 kB' 'Inactive(anon): 0 kB' 'Active(file): 159852 kB' 'Inactive(file): 274308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1587576 kB' 'Mapped: 83080 kB' 'AnonPages: 110268 kB' 'Shmem: 1153416 kB' 'KernelStack: 11960 kB' 'PageTables: 3268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 150368 kB' 'Slab: 400648 kB' 'SReclaimable: 150368 kB' 'SUnreclaim: 250280 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@31-32 read/continue repeated for each node0 meminfo key from MemTotal through HugePages_Free -- none matched HugePages_Surp]
00:02:59.094 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:59.094 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:59.094 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:59.094 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
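The @22-@29 lines in the node0 query above show how the same scan is retargeted at one NUMA node: if /sys/devices/system/node/nodeN/meminfo exists it replaces /proc/meminfo, and the 'Node N ' prefix those per-node lines carry is stripped before the key scan runs. A rough sketch of that selection (extglob assumed, since the +([0-9]) pattern requires it):

    shopt -s extglob                                        # for the +([0-9]) pattern below
    node=0 mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo # the @23/@24 lines
    mapfile -t mem < "$mem_f"                               # @28: slurp the file into an array
    mem=("${mem[@]#Node +([0-9]) }")                        # @29: 'Node 0 MemTotal: ...' -> 'MemTotal: ...'

The key-scan loop from the earlier sketch then reads from "${mem[@]}" instead of the raw file, as the trace repeats next for node1.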
00:02:59.094 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:59.094 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:59.094 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:02:59.094 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:59.094 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:02:59.094 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:59.094 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:59.094 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:59.094 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:02:59.094 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:02:59.094 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:59.094 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:59.094 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:59.094 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:59.094 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 15096248 kB' 'MemUsed: 12606860 kB' 'SwapCached: 0 kB' 'Active: 5771404 kB' 'Inactive: 3401840 kB' 'Active(anon): 5541392 kB' 'Inactive(anon): 0 kB' 'Active(file): 230012 kB' 'Inactive(file): 3401840 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8742044 kB' 'Mapped: 105560 kB' 'AnonPages: 431288 kB' 'Shmem: 5110192 kB' 'KernelStack: 10312 kB' 'PageTables: 5728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 329988 kB' 'Slab: 701272 kB' 'SReclaimable: 329988 kB' 'SUnreclaim: 371284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@31-32 read/continue repeated for each node1 meminfo key from MemTotal through HugePages_Free -- none matched HugePages_Surp]
00:02:59.095 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:59.096 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:59.096 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:59.096 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:59.096 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:59.096 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:59.096 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:59.096 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:02:59.096 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:59.096 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:59.096 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:59.096 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
node1=512 expecting 512
00:02:59.096 19:03:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:02:59.096
00:02:59.096 real 0m2.958s
00:02:59.096 user 0m1.045s
00:02:59.096 sys 0m1.879s
00:02:59.096 19:03:45 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:02:59.096 19:03:45 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:02:59.096 ************************************
00:02:59.096 END TEST even_2G_alloc
00:02:59.096 ************************************
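What even_2G_alloc just verified, condensed: 2G of hugepage memory at the 2048 kB page size is 1024 pages, and with even allocation the pool must land as 512 pages on each of the two nodes, with no surplus or reserved pages skewing the count. A sketch of that accounting (HUGEMEM=2048 inferred from the test name; not the script's literal code):

    hugemem_mb=2048 hugepagesize_kb=2048 no_nodes=2 surp=0 resv=0
    nr_hugepages=$(( hugemem_mb * 1024 / hugepagesize_kb ))        # 1024 pages total
    (( 1024 == nr_hugepages + surp + resv )) || echo "pool size mismatch"   # the @110 check
    for (( node = 0; node < no_nodes; node++ )); do
        echo "node$node=$(( nr_hugepages / no_nodes )) expecting $(( nr_hugepages / no_nodes ))"
    done                                                           # node0=512 ... node1=512 ...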
00:02:59.356 19:03:45 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:02:59.356 19:03:45 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:02:59.356 19:03:45 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:02:59.356 19:03:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:59.356 ************************************
00:02:59.356 START TEST odd_alloc
00:02:59.356 ************************************
00:02:59.356 19:03:45 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc
00:02:59.356 19:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:02:59.356 19:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:02:59.356 19:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:02:59.356 19:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:59.356 19:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:02:59.356 19:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:02:59.356 19:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:02:59.356 19:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:02:59.356 19:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:02:59.356 19:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:59.356 19:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:59.356 19:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:59.356 19:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:02:59.356 19:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:02:59.356 19:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:59.356 19:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:02:59.356 19:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:02:59.356 19:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:02:59.356 19:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:59.356 19:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:02:59.356 19:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:02:59.356 19:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:02:59.356 19:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
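The @81-@84 loop above is get_test_nr_hugepages_per_node splitting an odd page count: 2098176 kB at 2048 kB per page is 1024.5 pages, rounded up to nr_hugepages=1025, which cannot divide evenly over two nodes. The loop fills node indices from the last one down, so node1 gets 512 and the remainder leaves 513 for node0. A condensed equivalent (the ':' no-op bookkeeping from @83/@84 folded into plain arithmetic):

    _nr_hugepages=1025 _no_nodes=2
    declare -a nodes_test
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))     # @82: 512, then 513
        _nr_hugepages=$(( _nr_hugepages - nodes_test[_no_nodes - 1] )) # 1025 -> 513 -> 0
        _no_nodes=$(( _no_nodes - 1 ))
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"               # node0=513 node1=512

The HUGEMEM=2049 / HUGE_EVEN_ALLOC=yes environment set next hands this 1025-page, 513/512 layout to setup.sh.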
00:02:59.356 19:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:02:59.356 19:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:02:59.356 19:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:02:59.356 19:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:59.356 19:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:02.654 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:02.654 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:02.654 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:02.654 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:02.654 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:02.654 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:02.654 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:02.654 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:02.654 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:02.654 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:02.654 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:02.654 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:02.654 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:02.654 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:02.654 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:02.654 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:02.654 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43903308 kB' 'MemAvailable: 47802184 kB' 'Buffers: 2704 kB' 'Cached: 10327000 kB' 'SwapCached: 0 kB' 'Active: 7193140 kB' 'Inactive: 3676148 kB' 'Active(anon): 6803276 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542232 kB' 'Mapped: 188752 kB' 'Shmem: 6263692 kB' 'KReclaimable: 480356 kB' 'Slab: 1102768 kB' 'SReclaimable: 480356 kB' 'SUnreclaim: 622412 kB' 'KernelStack: 22448 kB' 'PageTables: 8904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 8220384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216900 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3079540 kB'
'DirectMap2M: 14432256 kB' 'DirectMap1G: 51380224 kB' 00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- 
00:03:02.654 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [read/compare loop over /proc/meminfo: fields Inactive(anon) through HardwareCorrupted each tested against \A\n\o\n\H\u\g\e\P\a\g\e\s; no match, continue]
00:03:02.655 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:02.655 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:02.655 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:02.655 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
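The trace above is common.sh's get_meminfo helper scanning /proc/meminfo line by line: mapfile slurps the file into an array, an extglob expansion strips any leading "Node <n> " prefix (so per-node sysfs files parse the same way), and each line is split on IFS=': ' into a field name and a value until the requested field is found. A minimal standalone sketch of that pattern (a reconstruction for illustration, not the verbatim SPDK source):

    #!/usr/bin/env bash
    shopt -s extglob
    # get_meminfo FIELD [NODE] -> prints the field's numeric value
    get_meminfo() {
        local get=$1 node=${2:-} var val rest line
        local mem_f=/proc/meminfo mem
        # prefer the per-NUMA-node file when a node is given and exists
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # node files prefix every line with "Node <n> "; strip it
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val rest <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    get_meminfo HugePages_Total   # e.g. 1025 on this test box

The escaped-pattern comparisons and the echo 0 / return 0 pair in the log correspond to the match branch of exactly this kind of loop.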
00:03:02.655 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:02.655 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:02.655 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:02.655 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:02.655 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:02.655 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:02.655 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:02.655 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:02.655 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:02.655 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:02.655 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:02.656 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:02.656 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43905728 kB' 'MemAvailable: 47804604 kB' 'Buffers: 2704 kB' 'Cached: 10327004 kB' 'SwapCached: 0 kB' 'Active: 7191892 kB' 'Inactive: 3676148 kB' 'Active(anon): 6802028 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541528 kB' 'Mapped: 188656 kB' 'Shmem: 6263696 kB' 'KReclaimable: 480356 kB' 'Slab: 1102792 kB' 'SReclaimable: 480356 kB' 'SUnreclaim: 622436 kB' 'KernelStack: 22176 kB' 'PageTables: 8516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 8217788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216676 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3079540 kB' 'DirectMap2M: 14432256 kB' 'DirectMap1G: 51380224 kB'
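The snapshot just printed carries the values this test actually cares about: HugePages_Total: 1025, HugePages_Free: 1025, HugePages_Rsvd: 0, HugePages_Surp: 0, Hugepagesize: 2048 kB. The figures are self-consistent, since Hugetlb (2099200 kB) equals HugePages_Total times Hugepagesize: 1025 x 2048 = 2099200. A quick awk check of that identity on a live system (valid as long as only one hugepage size is configured; with multiple sizes Hugetlb is a per-size sum):

    awk '/^HugePages_Total:/ {t=$2} /^Hugepagesize:/ {sz=$2} /^Hugetlb:/ {h=$2}
         END { printf "total=%s size=%skB hugetlb=%skB t*sz=%d\n", t, sz, h, t*sz }' /proc/meminfo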
00:03:02.656 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [read/compare loop over /proc/meminfo: fields MemTotal through HugePages_Rsvd each tested against \H\u\g\e\P\a\g\e\s\_\S\u\r\p; no match, continue]
00:03:02.657 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:02.657 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:02.657 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:02.657 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
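The right-hand side shows up as \H\u\g\e\P\a\g\e\s\_\S\u\r\p because bash's xtrace re-quotes a literal (quoted) string by backslash-escaping every character; an unquoted right-hand side of == inside [[ ]] would instead be treated as a glob pattern. A small demo of the difference (illustrative only, not taken from the test scripts):

    var=HugePages_Surp
    [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] && echo "literal match"
    pat='HugePages_*'
    [[ $var == $pat ]]   && echo "glob match (unquoted pattern expands as a glob)"
    [[ $var == "$pat" ]] || echo "no literal match (quoting forces literal compare)"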
00:03:02.657 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:02.657 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:02.657 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:02.657 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:02.657 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:02.657 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:02.657 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:02.657 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:02.657 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:02.657 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:02.657 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:02.658 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:02.658 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43906624 kB' 'MemAvailable: 47805500 kB' 'Buffers: 2704 kB' 'Cached: 10327020 kB' 'SwapCached: 0 kB' 'Active: 7191404 kB' 'Inactive: 3676148 kB' 'Active(anon): 6801540 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541032 kB' 'Mapped: 188656 kB' 'Shmem: 6263712 kB' 'KReclaimable: 480356 kB' 'Slab: 1102792 kB' 'SReclaimable: 480356 kB' 'SUnreclaim: 622436 kB' 'KernelStack: 22128 kB' 'PageTables: 8288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 8217812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216676 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3079540 kB' 'DirectMap2M: 14432256 kB' 'DirectMap1G: 51380224 kB'
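Here node= is empty, so the [[ -e /sys/devices/system/node/node/meminfo ]] probe tests a literal (nonexistent) path and the helper falls back to the global /proc/meminfo. When a node number is supplied, the same counters come from the per-NUMA-node sysfs file instead. A sweep over whatever nodes the machine exposes (paths and node numbering are machine-dependent):

    for f in /sys/devices/system/node/node*/meminfo; do
        [[ -e "$f" ]] || continue
        # node files prefix each line with "Node <n> ", hence fields $3/$4
        awk '$3 ~ /^HugePages_(Total|Free):$/ { print FILENAME ": " $3 " " $4 }' "$f"
    done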
00:03:02.658 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [read/compare loop over /proc/meminfo: fields MemTotal through HugePages_Free each tested against \H\u\g\e\P\a\g\e\s\_\R\s\v\d; no match, continue]
00:03:02.660 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:02.660 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:02.660 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:02.660 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:02.660 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:02.660 nr_hugepages=1025
00:03:02.660 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:02.660 resv_hugepages=0
00:03:02.660 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:02.660 surplus_hugepages=0
00:03:02.660 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:02.660 anon_hugepages=0
00:03:02.660 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:02.660 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
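This is the heart of the odd_alloc case: the test requested an odd page count (1025), and the two arithmetic checks assert that the kernel satisfied it exactly, with no surplus, reserved, or anonymous huge pages muddying the count. The same bookkeeping, with the values read straight out of the trace:

    nr_hugepages=1025 surp=0 resv=0 anon=0
    (( 1025 == nr_hugepages + surp + resv )) && echo "allocation fully accounted for"
    (( 1025 == nr_hugepages ))               && echo "odd page count honored exactly"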
00:03:02.660 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:02.660 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:02.660 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:02.660 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:02.660 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:02.660 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:02.660 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:02.660 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:02.660 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:02.660 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:02.660 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:02.660 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43907008 kB' 'MemAvailable: 47805884 kB' 'Buffers: 2704 kB' 'Cached: 10327040 kB' 'SwapCached: 0 kB' 'Active: 7191436 kB' 'Inactive: 3676148 kB' 'Active(anon): 6801572 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541032 kB' 'Mapped: 188656 kB' 'Shmem: 6263732 kB' 'KReclaimable: 480356 kB' 'Slab: 1102792 kB' 'SReclaimable: 480356 kB' 'SUnreclaim: 622436 kB' 'KernelStack: 22128 kB' 'PageTables: 8288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 8218220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216676 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3079540 kB' 'DirectMap2M: 14432256 kB' 'DirectMap1G: 51380224 kB'
00:03:02.660 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:02.660 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [read/compare loop over /proc/meminfo: fields MemTotal through NFS_Unstable each tested against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l; no match, continue; excerpt ends mid-comparison at Bounce]
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.661 19:03:48 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.661 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.662 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.662 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:02.662 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.662 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.662 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.662 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:02.662 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:02.662 19:03:48 setup.sh.hugepages.odd_alloc -- 
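The scan condensed above is the heart of setup/common.sh's get_meminfo helper: it snapshots a meminfo file, then reads it back one 'key: value' pair at a time until the requested key matches, and the value is echoed back. A minimal sketch of that pattern, reconstructed from the xtrace rather than copied from the SPDK source (the function name and observable behavior follow the trace; the internals are an approximation):

  # get_meminfo KEY [NODE] - print the value of one meminfo key.
  # Reconstruction from the xtrace above, not the SPDK source itself.
  get_meminfo() {
      local get=$1 node=$2 mem_f=/proc/meminfo
      local var val _ line mem
      # Per-node queries read the sysfs copy instead of /proc/meminfo.
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      shopt -s extglob
      mem=("${mem[@]#Node +([0-9]) }")   # sysfs per-node lines carry a "Node N " prefix
      local IFS=': '
      for line in "${mem[@]}"; do
          read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }
  get_meminfo HugePages_Total   # -> 1025 on this machine, matching the trace

00:03:02.662 19:03:48 setup.sh.hugepages.odd_alloc --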
setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:02.662 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:02.662 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:02.662 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:02.662 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:02.662 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:02.662 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:02.662 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:02.662 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:02.662 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:02.662 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:02.662 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:02.662 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:02.662 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:02.662 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:02.662 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:02.662 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.662 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:02.662 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:02.662 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.662 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.662 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.662 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.662 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 28816764 kB' 'MemUsed: 3775320 kB' 'SwapCached: 0 kB' 'Active: 1420436 kB' 'Inactive: 274308 kB' 'Active(anon): 1260584 kB' 'Inactive(anon): 0 kB' 'Active(file): 159852 kB' 'Inactive(file): 274308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1587700 kB' 'Mapped: 83092 kB' 'AnonPages: 110156 kB' 'Shmem: 1153540 kB' 'KernelStack: 11960 kB' 'PageTables: 3216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 150368 kB' 'Slab: 401572 kB' 'SReclaimable: 150368 kB' 'SUnreclaim: 251204 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:02.662 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.662 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:02.662 19:03:48 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31-32 -- # [xtrace condensed: the node0 scan read and skipped each key, MemFree through HugePages_Free, until HugePages_Surp matched]
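With the system-wide figures in hand, hugepages.sh verifies an accounting identity before checking each node: HugePages_Total must equal the requested page count plus surplus plus reserved pages ((( 1025 == nr_hugepages + surp + resv )) above), and each node's expected share is then topped up by the reserved count and that node's own surplus. A hedged sketch of that bookkeeping, assuming the get_meminfo sketch shown earlier and a nodes_test array already populated by get_nodes:

  # Accounting check reconstructed from hugepages.sh@110-@117 in the trace.
  surp=$(get_meminfo HugePages_Surp)
  resv=$(get_meminfo HugePages_Rsvd)
  (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1
  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))                                   # global reserve
      (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # per-node surplus, 0 here
  done

00:03:02.663 19:03:48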
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:02.663 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:02.663 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:02.663 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:02.663 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:02.663 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:02.663 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:02.663 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:02.663 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:02.663 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:02.663 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.663 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:02.663 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:02.663 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.663 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.663 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.663 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.663 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 15092564 kB' 'MemUsed: 12610544 kB' 'SwapCached: 0 kB' 'Active: 5771416 kB' 'Inactive: 3401840 kB' 'Active(anon): 5541404 kB' 'Inactive(anon): 0 kB' 'Active(file): 230012 kB' 'Inactive(file): 3401840 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8742084 kB' 'Mapped: 105564 kB' 'AnonPages: 431288 kB' 'Shmem: 5110232 kB' 'KernelStack: 10216 kB' 'PageTables: 5216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 329988 kB' 'Slab: 701204 kB' 'SReclaimable: 329988 kB' 'SUnreclaim: 371216 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:02.663 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.663 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:02.663 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.663 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.663 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.663 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:02.663 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.663 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.663 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.663 19:03:48 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:02.663 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the node1 scan read and skipped each key through HugePages_Free until HugePages_Surp matched] 00:03:02.665 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:02.665 19:03:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:02.665 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:02.665 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:02.665 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:02.665 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:02.665 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 --
# echo 'node0=512 expecting 513' 00:03:02.665 node0=512 expecting 513 00:03:02.665 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:02.665 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:02.665 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:02.665 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:02.665 node1=513 expecting 512 00:03:02.665 19:03:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:02.665 00:03:02.665 real 0m3.279s 00:03:02.665 user 0m1.170s 00:03:02.665 sys 0m2.128s 00:03:02.665 19:03:48 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:02.665 19:03:48 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:02.665 ************************************ 00:03:02.665 END TEST odd_alloc 00:03:02.665 ************************************ 00:03:02.665 19:03:48 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:02.665 19:03:48 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:02.665 19:03:48 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:02.665 19:03:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:02.665 ************************************ 00:03:02.665 START TEST custom_alloc 00:03:02.665 ************************************ 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- 
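The custom_alloc test that starts above sizes its pools the same way both times: get_test_nr_hugepages takes a size in kB and divides by the 2048 kB default hugepage size, yielding nr_hugepages=512 for 1048576 kB (1 GiB) here and, further down, 1024 for 2097152 kB (2 GiB); the per-node helper then spreads the count across the two nodes unless nodes_hp overrides it, as the trace shows. A sketch of that conversion, with semantics inferred from the traced values rather than taken from the SPDK source:

  # Inferred from the trace: 1048576 kB / 2048 kB -> 512 pages,
  # 2097152 kB / 2048 kB -> 1024 pages. Not the SPDK source itself.
  default_hugepages=2048   # kB, the Hugepagesize reported in /proc/meminfo
  get_test_nr_hugepages() {
      local size=$1        # kB
      (( size >= default_hugepages )) || return 1
      nr_hugepages=$(( size / default_hugepages ))
  }
  get_test_nr_hugepages 1048576 && echo "$nr_hugepages"   # 512
  get_test_nr_hugepages 2097152 && echo "$nr_hugepages"   # 1024
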
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 
-- # for node in "${!nodes_hp[@]}" 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:02.665 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:02.666 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:02.666 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:02.666 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:02.666 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:02.666 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:02.666 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:02.666 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:02.666 19:03:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:02.666 19:03:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:02.666 19:03:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:05.964 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:05.964 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:05.964 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:05.964 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:05.964 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:05.964 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:05.964 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:05.964 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:05.964 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:05.964 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:05.964 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:05.964 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:05.964 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:05.964 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:05.964 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:05.964 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:05.964 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:05.964 19:03:51 setup.sh.hugepages.custom_alloc -- 
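custom_alloc declares local IFS=, up front precisely so the HUGENODE string assembled above ('nodes_hp[0]=512,nodes_hp[1]=1024') can be split on commas when setup.sh consumes it. The sketch below shows how such a spec can map onto the kernel's per-node sysfs knobs; the splitting logic is illustrative, and while the sysfs path is the standard kernel interface for 2048 kB pages, that setup.sh writes it this way is an assumption:

  # Illustrative parsing of HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'.
  HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
  IFS=, read -ra specs <<< "$HUGENODE"
  for spec in "${specs[@]}"; do
      node=${spec#nodes_hp[}; node=${node%%]*}   # node index: 0, then 1
      pages=${spec#*=}                           # page count: 512, then 1024
      echo "$pages" | sudo tee \
          "/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages"
  done
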
00:03:05.964 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:03:05.964 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:05.964 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:03:05.964 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:05.964 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:05.964 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:05.964 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:05.964 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:05.964 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:05.964 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:05.964 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:05.964 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:05.964 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:05.964 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:05.964 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:05.964 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:05.964 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:05.964 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:05.964 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:05.964 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:05.964 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:05.964 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42862976 kB' 'MemAvailable: 46761852 kB' 'Buffers: 2704 kB' 'Cached: 10327172 kB' 'SwapCached: 0 kB' 'Active: 7192364 kB' 'Inactive: 3676148 kB' 'Active(anon): 6802500 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541840 kB' 'Mapped: 188776 kB' 'Shmem: 6263864 kB' 'KReclaimable: 480356 kB' 'Slab: 1102172 kB' 'SReclaimable: 480356 kB' 'SUnreclaim: 621816 kB' 'KernelStack: 22160 kB' 'PageTables: 8452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 8218952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216756 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3079540 kB' 'DirectMap2M: 14432256 kB' 'DirectMap1G: 51380224 kB'
[~160 xtrace lines elided: the setup/common.sh@31-@32 read/compare loop steps through every /proc/meminfo field in the dump above, hitting "continue" on each key from MemTotal through HardwareCorrupted, until AnonHugePages matches]
00:03:05.966 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:05.966 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:05.966 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:05.966 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:05.966 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:05.966 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:05.966 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:05.966 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:05.966 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:05.966 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:05.966 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:05.966 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:05.966 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:05.966 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:05.966 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:05.966 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42863976 kB' 'MemAvailable: 46762852 kB' 'Buffers: 2704 kB' 'Cached: 10327176 kB' 'SwapCached: 0 kB' 'Active: 7192072 kB' 'Inactive: 3676148 kB' 'Active(anon): 6802208 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541552 kB' 'Mapped: 188744 kB' 'Shmem: 6263868 kB' 'KReclaimable: 480356 kB' 'Slab: 1102152 kB' 'SReclaimable: 480356 kB' 'SUnreclaim: 621796 kB' 'KernelStack: 22160 kB' 'PageTables: 8448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 8218972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216708 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3079540 kB' 'DirectMap2M: 14432256 kB' 'DirectMap1G: 51380224 kB'
[~200 xtrace lines elided: the same setup/common.sh@31-@32 read/compare loop, this time skipping every field from MemTotal through HugePages_Rsvd until HugePages_Surp matches]
00:03:05.968 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:05.968 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:05.968 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:05.968 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:05.968 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:05.968 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:05.968 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:05.968 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:05.968 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:05.968 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:05.968 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:05.968 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:05.968 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:05.968 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:05.968 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:05.968 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42865428 kB' 'MemAvailable: 46764304 kB' 'Buffers: 2704 kB' 'Cached: 10327176 kB' 'SwapCached: 0 kB' 'Active: 7191924 kB' 'Inactive: 3676148 kB' 'Active(anon): 6802060 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541396 kB' 'Mapped: 188668 kB' 'Shmem: 6263868 kB' 'KReclaimable: 480356 kB' 'Slab: 1102168 kB' 'SReclaimable: 480356 kB' 'SUnreclaim: 621812 kB' 'KernelStack: 22176 kB' 'PageTables: 8500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 8218992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216708 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3079540 kB' 'DirectMap2M: 14432256 kB' 'DirectMap1G: 51380224 kB'
[xtrace of the same read/compare loop for HugePages_Rsvd follows; the captured log breaks off mid-comparison at ShmemPmdMapped]
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc 
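The scan just traced is the get_meminfo helper from setup/common.sh walking every key of a meminfo file until it reaches the requested one. A minimal standalone sketch of that pattern, reconstructed from the trace rather than copied from the SPDK source (the extglob prefix-strip is what makes the per-node files parse the same way as /proc/meminfo):

#!/usr/bin/env bash
shopt -s extglob  # needed for the +([0-9]) pattern below

# Print the value of field $1 from /proc/meminfo, or from
# /sys/devices/system/node/node$2/meminfo when a node number is given.
get_meminfo() {
    local get=$1 node=$2 var val _ mem_f line mem
    mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem <"$mem_f"
    # Per-node files prefix every line with "Node N "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        # Keep scanning until the requested key matches, then print its value.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

resv=$(get_meminfo HugePages_Rsvd)      # system-wide lookup, as traced above
surp0=$(get_meminfo HugePages_Surp 0)   # per-node variant used further down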
00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:03:05.970 nr_hugepages=1536
00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:05.970 resv_hugepages=0
00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:05.970 surplus_hugepages=0
00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:05.970 anon_hugepages=0
00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 42866436 kB' 'MemAvailable: 46765312 kB' 'Buffers: 2704 kB' 'Cached: 10327232 kB' 'SwapCached: 0 kB' 'Active: 7191292 kB' 'Inactive: 3676148 kB' 'Active(anon): 6801428 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540668 kB' 'Mapped: 188668 kB' 'Shmem: 6263924 kB' 'KReclaimable: 480356 kB' 'Slab: 1102168 kB' 'SReclaimable: 480356 kB' 'SUnreclaim: 621812 kB' 'KernelStack: 22144 kB' 'PageTables: 8388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 8219012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216708 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3079540 kB' 'DirectMap2M: 14432256 kB' 'DirectMap1G: 51380224 kB'
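One consistency check can be read straight off this snapshot: the hugetlb pool size is the page count times the page size. A side calculation for the reader, not part of the test itself:

echo $((1536 * 2048))   # HugePages_Total x Hugepagesize = 3145728 kB, matching 'Hugetlb: 3145728 kB'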
00:03:05.970 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] .. [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]  (no match for keys MemTotal through Unaccepted; continue on each)
00:03:05.972 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:05.972 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:03:05.972 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:05.972 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:05.972 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:05.972 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:03:05.972 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:05.972 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:05.972 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:05.972 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:05.972 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:05.972 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
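get_nodes, traced above, enumerates the NUMA nodes and records how many huge pages each one currently holds (512 on node0, 1024 on node1, summing to the 1536 requested). A sketch of that enumeration; reading the count from the sysfs nr_hugepages file is an assumption here, since the xtrace only shows the already-expanded values:

shopt -s extglob nullglob
nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    # Assumed source of the per-node count; the trace records only the result.
    nodes_sys[${node##*node}]=$(<"$node"/hugepages/hugepages-2048kB/nr_hugepages)
done
no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 ))   # the test bails out if no NUMA nodes were found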
00:03:05.972 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:05.972 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:05.972 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:05.972 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:05.972 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:05.972 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:05.972 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:05.972 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:05.972 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:05.972 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:05.972 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:05.972 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:05.972 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:05.972 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:05.972 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 28828420 kB' 'MemUsed: 3763664 kB' 'SwapCached: 0 kB' 'Active: 1419580 kB' 'Inactive: 274308 kB' 'Active(anon): 1259728 kB' 'Inactive(anon): 0 kB' 'Active(file): 159852 kB' 'Inactive(file): 274308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1587748 kB' 'Mapped: 83104 kB' 'AnonPages: 109212 kB' 'Shmem: 1153588 kB' 'KernelStack: 11928 kB' 'PageTables: 3172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 150368 kB' 'Slab: 400548 kB' 'SReclaimable: 150368 kB' 'SUnreclaim: 250180 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
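The node0 snapshot reports HugePages_Total: 512, agreeing with the value get_nodes recorded in nodes_sys[0] above. The same cross-check in one line, reusing the illustrative get_meminfo sketch from earlier:

[[ $(get_meminfo HugePages_Total 0) -eq ${nodes_sys[0]} ]] && echo "node0 OK"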
00:03:05.972 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] .. [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]  (no match for keys MemTotal through HugePages_Free; continue on each)
00:03:05.974 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:05.974 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:05.974 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:05.974 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
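Each pass of the hugepages.sh@115 loop above folds the global reserved count and that node's surplus into the expected per-node total before the final comparison. Schematically, again reusing the get_meminfo sketch (nodes_test is assumed to start from the requested 512/1024 split):

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                # global reserved pages, 0 in this run
    surp=$(get_meminfo HugePages_Surp "$node")    # per-node surplus from the node meminfo
    (( nodes_test[node] += surp ))                # node0: 512 + 0, node1: 1024 + 0
done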
00:03:05.974 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:05.974 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:05.974 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:05.974 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:05.974 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:05.974 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:05.974 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:05.974 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:05.974 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:05.974 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:05.974 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:05.974 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:05.974 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:05.974 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:05.974 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 14037336 kB' 'MemUsed: 13665772 kB' 'SwapCached: 0 kB' 'Active: 5771796 kB' 'Inactive: 3401840 kB' 'Active(anon): 5541784 kB' 'Inactive(anon): 0 kB' 'Active(file): 230012 kB' 'Inactive(file): 3401840 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8742208 kB' 'Mapped: 105552 kB' 'AnonPages: 431552 kB' 'Shmem: 5110356 kB' 'KernelStack: 10184 kB' 'PageTables: 5092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 329988 kB' 'Slab: 701620 kB' 'SReclaimable: 329988 kB' 'SUnreclaim: 371632 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:05.974 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] .. [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]  (no match for keys MemTotal through ShmemHugePages; continue on each)
00:03:05.975 19:03:51
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:05.975 node0=512 expecting 512 00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:05.975 node1=1024 expecting 1024
00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:05.975
00:03:05.975 real 0m3.242s
00:03:05.975 user 0m1.165s
00:03:05.975 sys 0m2.098s
00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:05.975 19:03:51 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:05.975 ************************************
00:03:05.975 END TEST custom_alloc
00:03:05.975 ************************************
00:03:05.975 19:03:52 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:05.975 19:03:52 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:05.975 19:03:52 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:05.975 19:03:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:05.975 ************************************
00:03:05.975 START TEST no_shrink_alloc
00:03:05.975 ************************************
00:03:05.975 19:03:52 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc
00:03:05.975 19:03:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:05.975 19:03:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:05.975 19:03:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:05.975 19:03:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:05.975 19:03:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:05.975 19:03:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:05.975 19:03:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:05.975 19:03:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:05.975 19:03:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:05.975 19:03:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:05.975 19:03:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:05.975 19:03:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:05.975 19:03:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:05.975 19:03:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:05.975 19:03:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:05.975 19:03:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:05.975 19:03:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:05.975 19:03:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:05.975 19:03:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
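
The trace above closes TEST custom_alloc with both nodes matching their expectations (node0=512, node1=1024) and opens TEST no_shrink_alloc, whose prologue distributes the hugepage request (2097152 kB at 2048 kB per page, i.e. 1024 pages) across NUMA nodes. Below is a minimal sketch of that distribution step, reconstructed from the hugepages.sh@62-@73 calls in the trace; it covers only the explicit-node path exercised here and omits the fallback taken when no nodes are named.

    # Reconstruction (simplified) of get_test_nr_hugepages_per_node as traced
    # at hugepages.sh@62-@73: every explicitly requested node receives the
    # full hugepage count, and nodes_test[] records the per-node expectation.
    nr_hugepages=1024                 # 2097152 kB requested / 2048 kB per page
    user_nodes=("0")                  # node ids handed to the test
    declare -a nodes_test=()
    if (( ${#user_nodes[@]} > 0 )); then
        for _no_nodes in "${user_nodes[@]}"; do
            nodes_test[_no_nodes]=$nr_hugepages    # node 0 -> 1024 pages
        done
    fi
    declare -p nodes_test             # declare -a nodes_test=([0]="1024")

The _no_nodes=2 local at @65 counts the NUMA nodes on this box and only matters on the fallback path, which is why the trace leaves node 1 without an entry here.
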
00:03:05.976 19:03:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:05.976 19:03:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:05.976 19:03:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:09.271 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:09.271 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:09.271 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:09.271 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:09.271 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:09.271 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:09.271 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:09.271 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:09.271 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:09.271 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:09.271 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:09.271 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:09.271 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:09.271 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:09.271 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:09.271 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:09.271 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.271 19:03:55 
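
One detail from the trace just above: at hugepages.sh@96, verify_nr_hugepages gates its anonymous-hugepage accounting on the kernel's transparent hugepage mode. The tested string "always [madvise] never" is the THP mode line with the active mode in brackets, and the escaped glob asks whether that active mode is [never]. A sketch of the same check, assuming the string comes from the kernel's usual /sys/kernel/mm/transparent_hugepage/enabled file, which is not itself shown in this excerpt:

    # THP gate as at hugepages.sh@96: skip the AnonHugePages accounting only
    # when the active (bracketed) THP mode is "never".
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. always [madvise] never
    if [[ $thp != *"[never]"* ]]; then
        grep AnonHugePages /proc/meminfo    # THP may still back anonymous mappings
    fi
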
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43867568 kB' 'MemAvailable: 47766444 kB' 'Buffers: 2704 kB' 'Cached: 10327332 kB' 'SwapCached: 0 kB' 'Active: 7192652 kB' 'Inactive: 3676148 kB' 'Active(anon): 6802788 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541900 kB' 'Mapped: 188688 kB' 'Shmem: 6264024 kB' 'KReclaimable: 480356 kB' 'Slab: 1102572 kB' 'SReclaimable: 480356 kB' 'SUnreclaim: 622216 kB' 'KernelStack: 22144 kB' 'PageTables: 8428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8219468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216788 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3079540 kB' 'DirectMap2M: 14432256 kB' 'DirectMap1G: 51380224 kB' 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.271 19:03:55 
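
The long run of setup/common.sh@31/@32 iterations above, and continuing below, is get_meminfo scanning the /proc/meminfo snapshot printed at common.sh@16: mapfile loads the file into an array, the "Node <n> " prefix is stripped so per-node meminfo files parse identically, and each "key: value" line is split with IFS=': ' until the requested key matches. A condensed sketch of the function, assembled from the setup/common.sh calls visible in the trace (simplified, not the verbatim script):

    #!/usr/bin/env bash
    shopt -s extglob    # needed for the +([0-9]) pattern below

    # Condensed get_meminfo: print the value recorded for one meminfo key,
    # optionally from a per-node meminfo file under /sys.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")     # drop the "Node N " prefix
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then    # e.g. AnonHugePages
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Total    # prints 1024 on this runner

The trailing _ in read -r var val _ swallows any unit such as "kB", which is why the values echoed and compared in the trace are bare numbers.
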
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.271 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43867228 kB' 'MemAvailable: 47766104 kB' 'Buffers: 2704 kB' 'Cached: 10327332 kB' 'SwapCached: 0 kB' 'Active: 7192376 kB' 'Inactive: 3676148 kB' 'Active(anon): 6802512 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541604 kB' 'Mapped: 188664 kB' 'Shmem: 6264024 kB' 'KReclaimable: 480356 kB' 'Slab: 1102572 kB' 'SReclaimable: 480356 kB' 'SUnreclaim: 622216 kB' 'KernelStack: 22144 kB' 'PageTables: 8452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8219484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216772 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3079540 kB' 
'DirectMap2M: 14432256 kB' 'DirectMap1G: 51380224 kB' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.272 
19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 
19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.272 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.273 19:03:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue
00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:09.273 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43866780 kB' 'MemAvailable: 47765656 kB' 'Buffers: 2704 kB' 'Cached: 10327332 kB' 'SwapCached: 0 kB' 'Active: 7193264 kB' 'Inactive: 3676148 kB' 'Active(anon): 6803400 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542524 kB' 'Mapped: 188664 kB' 'Shmem: 6264024 kB' 'KReclaimable: 480356 kB' 'Slab: 1102572 kB' 'SReclaimable: 480356 kB' 'SUnreclaim: 622216 kB' 'KernelStack: 22128 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8264816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216756 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3079540 kB' 'DirectMap2M: 14432256 kB' 'DirectMap1G: 51380224 kB'
[setup/common.sh@31-32 trace elided: the loop reads each /proc/meminfo field in turn and continues past every key that is not HugePages_Rsvd]
00:03:09.274 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:09.274 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:09.274 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:09.274 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:09.274 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:09.274 nr_hugepages=1024
00:03:09.274 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:09.274 resv_hugepages=0
00:03:09.274 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:09.274 surplus_hugepages=0
00:03:09.274 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:09.274 anon_hugepages=0
00:03:09.274 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:09.274 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:09.274 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:09.274 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:09.274 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:09.274 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:09.274 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:09.274 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:09.274 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:09.274 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:09.274 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:09.274 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:09.274 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:09.274 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:09.274 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43868744 kB' 'MemAvailable: 47767620 kB' 'Buffers: 2704 kB' 'Cached: 10327336 kB' 'SwapCached: 0 kB' 'Active: 7192780 kB' 'Inactive: 3676148 kB' 'Active(anon): 6802916 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542084 kB' 'Mapped: 188664 kB' 'Shmem: 6264028 kB' 'KReclaimable: 480356 kB' 'Slab: 1102564 kB' 'SReclaimable: 480356 kB' 'SUnreclaim: 622208 kB' 'KernelStack: 22144 kB' 'PageTables: 8504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8219160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216708 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3079540 kB' 'DirectMap2M: 14432256 kB' 'DirectMap1G: 51380224 kB'
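The trace above is SPDK's get_meminfo helper from setup/common.sh: it snapshots /proc/meminfo (or a per-node meminfo file under sysfs) and walks it field by field until the requested key matches, echoing the value. A minimal sketch of that pattern, simplified from the trace rather than copied verbatim from the script (the mapfile snapshot and Node-prefix stripping are reduced to their essentials):

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern visible in the trace: scan a
    # meminfo file line by line, split "Key: value kB" on ': ', and
    # print the value of the requested key.
    get_meminfo() {
        local get=$1 node=$2 line var val _
        local mem_f=/proc/meminfo

        # Per-node counters live in sysfs when a node number is given.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        while read -r line; do
            line=${line#"Node $node "}   # per-node lines read "Node N Key: value"
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done <"$mem_f"
        return 1
    }

    # Mirrors the calls in the trace:
    get_meminfo HugePages_Total      # -> 1024 on this machine
    get_meminfo HugePages_Surp 0     # -> 0, read from node0's meminfo

Scanning a snapshot like this avoids re-reading the file per key, but it does mean each lookup (surp, resv, total) is a fresh pass, which is why the same field list appears repeatedly in the trace.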
[setup/common.sh@31-32 trace elided: the loop reads each /proc/meminfo field in turn and continues past every key that is not HugePages_Total]
00:03:09.275 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:09.275 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:09.275 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:09.275 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:09.275 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:09.275 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:09.275 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:09.275 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:09.275 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:09.275 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
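get_nodes, traced above, enumerates the NUMA nodes through the extglob pattern /sys/devices/system/node/node+([0-9]) and records how many hugepages each node currently exposes (1024 on node0, 0 on node1 here). A sketch of that enumeration under the same assumptions the trace shows, reading the standard per-node nr_hugepages counter for the 2048 kB default page size (the exact source of the counts inside the real get_nodes is not shown in this log, so the sysfs read is an assumption):

    #!/usr/bin/env bash
    shopt -s extglob nullglob

    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # Hugepagesize is 2048 kB on this system (see the meminfo dump above).
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done

    echo "no_nodes=${#nodes_sys[@]}"          # 2 on this machine
    for n in "${!nodes_sys[@]}"; do
        echo "node$n holds ${nodes_sys[$n]} hugepages"
    done

${node##*node} strips everything up to the last "node", leaving just the numeric index, which is the same trick the traced script uses to key its arrays.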
00:03:09.275 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:09.275 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:09.275 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:09.275 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:09.275 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:09.275 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:09.275 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:09.275 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:09.275 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:09.275 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:09.275 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:09.275 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:09.275 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:09.275 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:09.275 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:09.275 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:09.275 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 27775256 kB' 'MemUsed: 4816828 kB' 'SwapCached: 0 kB' 'Active: 1421348 kB' 'Inactive: 274308 kB' 'Active(anon): 1261496 kB' 'Inactive(anon): 0 kB' 'Active(file): 159852 kB' 'Inactive(file): 274308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1587856 kB' 'Mapped: 83116 kB' 'AnonPages: 110948 kB' 'Shmem: 1153696 kB' 'KernelStack: 11896 kB' 'PageTables: 3176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 150368 kB' 'Slab: 400612 kB' 'SReclaimable: 150368 kB' 'SUnreclaim: 250244 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[setup/common.sh@31-32 trace elided: the loop reads each node0 meminfo field in turn and continues past every key that is not HugePages_Surp]
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.276 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.276 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:09.276 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:09.276 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:09.276 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:09.276 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:09.276 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:09.276 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:09.276 node0=1024 expecting 1024 00:03:09.276 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:09.276 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:09.276 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:09.276 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:09.276 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.276 19:03:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:12.570 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:12.570 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:12.570 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:12.570 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:12.570 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:12.570 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:12.570 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:12.570 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:12.570 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:12.570 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:12.570 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:12.570 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:12.570 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:12.570 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:12.570 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:12.570 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:12.570 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:12.570 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:12.571 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:12.571 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:12.571 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:12.571 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:12.571 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:12.571 19:03:58 
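An aside on what the hugepages.sh@117-130 lines above are doing: any surplus pages reported by get_meminfo get folded into the per-node nodes_test counts, each distinct count is recorded once in the sorted_t/sorted_s maps, and every node's total is echoed next to the expected value. A minimal bash sketch of that loop, reconstructed from this trace (array names follow the trace; the code that fills nodes_test/nodes_sys beforehand is assumed):

#!/usr/bin/env bash
# Per-node hugepage check as traced at setup/hugepages.sh@117-130. nodes_test
# and nodes_sys are assumed to be filled in earlier; they are seeded here with
# the values visible in this log (node0=1024).
nodes_test=([0]=1024)
nodes_sys=([0]=1024)
declare -A sorted_t=() sorted_s=()
expected=1024

for node in "${!nodes_test[@]}"; do
	surp=0 # from get_meminfo HugePages_Surp, which returned 0 above
	((nodes_test[node] += surp)) || true
	# Record each distinct count once: a single key left in sorted_t means
	# every node ended up with the same number of hugepages.
	sorted_t[${nodes_test[node]}]=1
	sorted_s[${nodes_sys[node]}]=1
	echo "node$node=${nodes_test[node]} expecting $expected"
	[[ ${nodes_test[node]} == "$expected" ]] # non-zero status fails the test
done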
00:03:12.571 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:12.571 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:12.571 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:12.571 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:12.571 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:12.571 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:12.571 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:12.571 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:12.571 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:12.571 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:12.571 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:12.571 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:12.571 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.571 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43873640 kB' 'MemAvailable: 47772516 kB' 'Buffers: 2704 kB' 'Cached: 10327476 kB' 'SwapCached: 0 kB' 'Active: 7193616 kB' 'Inactive: 3676148 kB' 'Active(anon): 6803752 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542864 kB' 'Mapped: 188688 kB' 'Shmem: 6264168 kB' 'KReclaimable: 480356 kB' 'Slab: 1102728 kB' 'SReclaimable: 480356 kB' 'SUnreclaim: 622372 kB' 'KernelStack: 22144 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8220256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216724 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3079540 kB' 'DirectMap2M: 14432256 kB' 'DirectMap1G: 51380224 kB'
00:03:12.571 [setup/common.sh@31-32 xtrace then walks every /proc/meminfo field in order, skipping each with continue until it reaches AnonHugePages]
00:03:12.573 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:12.573 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:12.573 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:12.573 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:12.573 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:12.573 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:12.573 [setup/common.sh@18-29 repeats as above: node empty, mem_f=/proc/meminfo, mapfile -t mem, "Node N " prefix strip]
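For readability: the setup/common.sh@17-33 lines above are a single call to the get_meminfo helper, which snapshots a meminfo file into an array, strips any per-node prefix, and walks it field by field until the requested key matches, which is why the log shows one continue per non-matching field. A minimal sketch of that logic as reconstructed from the trace (the real setup/common.sh may differ in detail):

#!/usr/bin/env bash
shopt -s extglob # needed for the "Node <N> " prefix strip below

# get_meminfo KEY [NODE]: print KEY's value from /proc/meminfo, or from the
# per-node meminfo file when NODE is given. Mirrors the trace above.
get_meminfo() {
	local get=$1 node=${2:-}
	local var val _
	local mem_f mem
	mem_f=/proc/meminfo
	if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	# Per-node lines look like "Node 0 MemFree: ..."; strip that prefix so
	# both file formats parse the same way.
	mem=("${mem[@]#Node +([0-9]) }")
	local line
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		if [[ $var == "$get" ]]; then
			echo "$val"
			return 0
		fi
	done
}

get_meminfo HugePages_Total # prints 1024 on this box, per the snapshot above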
00:03:12.573 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:12.573 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' [/proc/meminfo snapshot, identical to the previous one except: MemFree: 43874068 kB, MemAvailable: 47772944 kB, Active: 7194572 kB, Active(anon): 6804708 kB, AnonPages: 543748 kB, Mapped: 189176 kB, Committed_AS: 8222420 kB, VmallocUsed: 216692 kB]
00:03:12.573 [setup/common.sh@31-32 xtrace walks every field again, skipping each with continue until it reaches HugePages_Surp]
00:03:12.576 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:12.576 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:12.576 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:12.576 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:12.576 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:12.576 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:12.576 [setup/common.sh@18-29 repeats as above: node empty, mem_f=/proc/meminfo, mapfile -t mem, "Node N " prefix strip]
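Pulling the last few calls together: verify_nr_hugepages (hugepages.sh@89-100) gathers AnonHugePages, HugePages_Surp, and HugePages_Rsvd through get_meminfo before the per-node comparison, and skips the anonymous-THP figure when transparent hugepages are set to [never]. A rough sketch of that opening, based on the trace (it assumes a get_meminfo like the earlier sketch is already defined; not the verbatim upstream source):

#!/usr/bin/env bash
# Opening of verify_nr_hugepages as traced at setup/hugepages.sh@89-100.
# Relies on a get_meminfo helper like the sketch shown earlier.
verify_nr_hugepages() {
	local node sorted_t sorted_s surp resv anon

	# THP state here is "always [madvise] never", which != *[never]*, so the
	# AnonHugePages figure is fetched too.
	if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
		anon=$(get_meminfo AnonHugePages) # 0 in this run
	fi
	surp=$(get_meminfo HugePages_Surp) # 0 in this run
	resv=$(get_meminfo HugePages_Rsvd) # HugePages_Rsvd is 0 in the snapshots above
	# ...per-node totals are then accumulated and compared, as echoed earlier:
	# node0=1024 expecting 1024
}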
00:03:12.576 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' [/proc/meminfo snapshot, identical to the previous one except: MemFree: 43871240 kB, MemAvailable: 47770116 kB, Active: 7198000 kB, Active(anon): 6808136 kB, AnonPages: 547700 kB, Slab: 1102736 kB, SUnreclaim: 622380 kB, KernelStack: 22128 kB, PageTables: 8404 kB, Committed_AS: 8226416 kB, VmallocUsed: 216712 kB]
00:03:12.577 [setup/common.sh@31-32 xtrace walks the fields once more, skipping each with continue on the way to HugePages_Rsvd]
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.577 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.577 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.577 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.577 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.577 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.577 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.577 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.577 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.577 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.577 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.577 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.577 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.577 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.577 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.577 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.577 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.577 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.577 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.577 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:12.578 nr_hugepages=1024 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:12.578 resv_hugepages=0 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:12.578 surplus_hugepages=0 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:12.578 anon_hugepages=0 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43872564 kB' 'MemAvailable: 47771440 kB' 'Buffers: 2704 kB' 
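What the elided scan is doing: get_meminfo in test/setup/common.sh splits each meminfo record on ': ' and discards every field except the one requested, which is why the trace shows one [[ ... ]] / continue pair per field. A minimal standalone sketch of that lookup, assuming a Linux /proc/meminfo; the name get_meminfo_value is illustrative, not SPDK's actual helper:

    #!/usr/bin/env bash
    # Fetch one /proc/meminfo field (e.g. HugePages_Rsvd), mirroring the
    # IFS=': ' read / continue loop visible in the trace above.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip non-matching fields
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1                               # field absent
    }

    get_meminfo_value HugePages_Rsvd           # e.g. prints 0, as above

With resv known to be 0, the accounting check above reduces to verifying that the 1024 configured hugepages all survived the allocation test.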
00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:12.578 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 43872564 kB' 'MemAvailable: 47771440 kB' 'Buffers: 2704 kB' 'Cached: 10327508 kB' 'SwapCached: 0 kB' 'Active: 7192612 kB' 'Inactive: 3676148 kB' 'Active(anon): 6802748 kB' 'Inactive(anon): 0 kB' 'Active(file): 389864 kB' 'Inactive(file): 3676148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541768 kB' 'Mapped: 188672 kB' 'Shmem: 6264200 kB' 'KReclaimable: 480356 kB' 'Slab: 1102736 kB' 'SReclaimable: 480356 kB' 'SUnreclaim: 622380 kB' 'KernelStack: 22112 kB' 'PageTables: 8336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 8220316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216692 kB' 'VmallocChunk: 0 kB' 'Percpu: 94976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3079540 kB' 'DirectMap2M: 14432256 kB' 'DirectMap1G: 51380224 kB'
[xtrace elided: setup/common.sh@31-32 compares every field above, MemTotal through Unaccepted, against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and hits continue on each, until the match below]
00:03:12.580 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:12.580 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:12.580 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:12.580 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:12.580 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:12.580 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:12.580 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:12.580 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:12.580 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:12.580 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:12.580 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:12.580 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
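The get_nodes pass above walks /sys/devices/system/node/node* to build the per-node expectation table (node0=1024, node1=0 on this two-socket runner). Roughly equivalent standalone shell, with the caveat that per-node meminfo lines carry a "Node N " prefix, which is what the mem=("${mem[@]#Node +([0-9]) }") strip in the trace handles; the awk field numbers below assume that layout:

    #!/usr/bin/env bash
    # List NUMA nodes and their HugePages_Total, the quantity the
    # no_shrink_alloc test compares against its expectations.
    shopt -s extglob nullglob
    declare -A node_pages
    for node in /sys/devices/system/node/node+([0-9]); do
        id=${node##*node}    # strip the path prefix: ".../node0" -> "0"
        # per-node meminfo lines look like: "Node 0 HugePages_Total:  1024"
        node_pages[$id]=$(awk '$3 == "HugePages_Total:" {print $4}' "$node/meminfo")
    done
    for id in "${!node_pages[@]}"; do
        echo "node$id=${node_pages[$id]}"   # e.g. node0=1024, node1=0
    done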
00:03:12.580 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:12.580 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:12.580 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:12.580 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:12.580 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:12.580 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:12.580 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:12.580 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:12.580 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:12.580 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:12.580 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:12.580 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.580 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:12.580 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:12.580 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 27792424 kB' 'MemUsed: 4799660 kB' 'SwapCached: 0 kB' 'Active: 1420480 kB' 'Inactive: 274308 kB' 'Active(anon): 1260628 kB' 'Inactive(anon): 0 kB' 'Active(file): 159852 kB' 'Inactive(file): 274308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1587944 kB' 'Mapped: 83120 kB' 'AnonPages: 110084 kB' 'Shmem: 1153784 kB' 'KernelStack: 11880 kB' 'PageTables: 3060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 150368 kB' 'Slab: 400924 kB' 'SReclaimable: 150368 kB' 'SUnreclaim: 250556 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace elided: node0's fields above, MemTotal through HugePages_Free, are compared against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and skipped with continue until the match below]
00:03:12.841 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:12.841 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:12.841 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:12.841 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:12.841 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:12.841 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:12.841 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:12.841 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:12.841 node0=1024 expecting 1024
00:03:12.841 19:03:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:12.841
00:03:12.841 real 0m6.770s
00:03:12.841 user 0m2.585s
00:03:12.841 sys 0m4.305s
00:03:12.841 19:03:58 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:12.841 19:03:58 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:12.841 ************************************
00:03:12.841 END TEST no_shrink_alloc
00:03:12.841 ************************************
00:03:12.841 19:03:58 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:03:12.841 19:03:58 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:12.841 19:03:58 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:12.841 19:03:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:12.841 19:03:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:12.841 19:03:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:12.841 19:03:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:12.841 19:03:58 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:12.841 19:03:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:12.841 19:03:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:12.841 19:03:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:12.841 19:03:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:12.841 19:03:58 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:12.841 19:03:58 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:12.841
00:03:12.841 real 0m25.517s
00:03:12.841 user 0m8.743s
00:03:12.841 sys 0m15.419s
00:03:12.841 19:03:58 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:12.841 19:03:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:12.841 ************************************
00:03:12.841 END TEST hugepages
00:03:12.841 ************************************
00:03:12.841 19:03:58 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:03:12.841 19:03:58 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:12.841 19:03:58 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:12.841 19:03:58 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:12.841 ************************************
00:03:12.841 START TEST driver
00:03:12.841 ************************************
00:03:12.841 19:03:58 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:03:12.841 * Looking for test storage...
00:03:12.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:12.841 19:03:59 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:03:12.841 19:03:59 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:12.841 19:03:59 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:18.116 19:04:03 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:03:18.116 19:04:03 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:18.116 19:04:03 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:18.116 19:04:03 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:03:18.116 ************************************
00:03:18.116 START TEST guess_driver
00:03:18.117 ************************************
00:03:18.117 19:04:03 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver
00:03:18.117 19:04:03 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:03:18.117 19:04:03 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:03:18.117 19:04:03 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:03:18.117 19:04:03 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:03:18.117 19:04:03 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups
00:03:18.117 19:04:03 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:03:18.117 19:04:03 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:03:18.117 19:04:03 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:03:18.117 19:04:03 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:03:18.117 19:04:03 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 176 > 0 ))
00:03:18.117 19:04:03 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:03:18.117 19:04:03 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:03:18.117 19:04:03 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:03:18.117 19:04:03 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:03:18.117 19:04:03 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz
insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:03:18.117 19:04:03 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:03:18.117 19:04:03 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:03:18.117 19:04:03 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
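Condensed, pick_driver's vfio branch above makes three checks before settling on vfio-pci. A sketch of the same decision, with paths and commands as they appear in the trace (the fallback driver name is an assumption, not shown in this run):

  # Sketch of the traced vfio check: prefer vfio-pci when the IOMMU is usable.
  vfio_ok() {
          local unsafe=N
          # unsafe no-IOMMU mode advertised by the vfio module?
          [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
                  unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
          # at least one IOMMU group means the IOMMU is active (176 in this run)
          local groups=(/sys/kernel/iommu_groups/*)
          (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]] || return 1
          # vfio_pci must resolve to loadable kernel modules (.ko files)
          [[ $(modprobe --show-depends vfio_pci) == *.ko* ]]
  }

  vfio_ok && driver=vfio-pci || driver=uio_pci_generic   # fallback name assumed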
00:03:18.117 19:04:03 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:03:18.117 19:04:03 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:03:18.117 Looking for driver=vfio-pci
00:03:18.117 19:04:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:18.117 19:04:03 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:03:18.117 19:04:03 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:03:18.117 19:04:03 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:20.654 19:04:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:20.654 19:04:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:03:20.654 19:04:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:20.654 [... the marker/driver check repeats for every device line that setup.sh config prints; each one reports vfio-pci ...]
00:03:22.561 19:04:08 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:03:22.561 19:04:08 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:03:22.561 19:04:08 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:22.561 19:04:08 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:27.839 
00:03:27.839 real 0m9.664s
00:03:27.839 user 0m2.572s
00:03:27.839 sys 0m4.763s
00:03:27.839 19:04:13 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:27.839 19:04:13 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:03:27.839 ************************************
00:03:27.839 END TEST guess_driver
00:03:27.839 ************************************
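The @57/@58/@61 cycle above is guess_driver validating every line of setup.sh config output against the chosen driver. Roughly (the column layout of the config output is inferred from the read pattern, not shown verbatim in the trace):

  # Sketch: every configured device line must name the chosen driver.
  fail=0
  while read -r _ _ _ _ marker setup_driver; do
          [[ $marker == '->' ]] || continue        # only lines carrying a binding arrow
          [[ $setup_driver == "$driver" ]] || fail=1
  done < <(scripts/setup.sh config)
  (( fail == 0 )) && echo "all devices bound to $driver"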
00:03:27.839 
00:03:27.839 real 0m14.380s
00:03:27.839 user 0m3.798s
00:03:27.839 sys 0m7.403s
00:03:27.839 19:04:13 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:27.839 19:04:13 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:03:27.839 ************************************
00:03:27.839 END TEST driver
00:03:27.839 ************************************
00:03:27.839 19:04:13 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:03:27.839 19:04:13 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:27.839 19:04:13 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:27.839 19:04:13 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:27.839 ************************************
00:03:27.839 START TEST devices
00:03:27.839 ************************************
00:03:27.839 19:04:13 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:03:27.839 * Looking for test storage...
00:03:27.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:27.839 19:04:13 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT
00:03:27.839 19:04:13 setup.sh.devices -- setup/devices.sh@192 -- # setup reset
00:03:27.839 19:04:13 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:27.839 19:04:13 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:31.132 19:04:17 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs
00:03:31.132 19:04:17 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:03:31.132 19:04:17 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:03:31.132 19:04:17 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf
00:03:31.132 19:04:17 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:03:31.132 19:04:17 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:03:31.132 19:04:17 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:03:31.132 19:04:17 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:31.132 19:04:17 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]]
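get_zoned_devs above filters out zoned namespaces, which the mount tests cannot format freely. The check reduces to one sysfs read per namespace, as sketched here:

  # Sketch: collect zoned NVMe namespaces so later steps can skip them.
  declare -A zoned_devs=()
  for nvme in /sys/block/nvme*; do
          dev=${nvme##*/}
          # "none" means a regular namespace; anything else is zoned
          if [[ -e $nvme/queue/zoned && $(< "$nvme/queue/zoned") != none ]]; then
                  zoned_devs[$dev]=1
          fi
  done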
00:03:31.133 19:04:17 setup.sh.devices -- setup/devices.sh@196 -- # blocks=()
00:03:31.133 19:04:17 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks
00:03:31.133 19:04:17 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=()
00:03:31.133 19:04:17 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:03:31.133 19:04:17 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:03:31.133 19:04:17 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:03:31.133 19:04:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:03:31.133 19:04:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
00:03:31.133 19:04:17 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0
00:03:31.133 19:04:17 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]]
00:03:31.133 19:04:17 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:03:31.133 19:04:17 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:03:31.133 19:04:17 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:03:31.392 No valid GPT data, bailing
00:03:31.392 19:04:17 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:31.392 19:04:17 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:03:31.392 19:04:17 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:03:31.392 19:04:17 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:03:31.392 19:04:17 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1
00:03:31.392 19:04:17 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:03:31.392 19:04:17 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816
00:03:31.392 19:04:17 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size ))
00:03:31.392 19:04:17 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:03:31.392 19:04:17 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0
00:03:31.392 19:04:17 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 ))
00:03:31.392 19:04:17 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
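A namespace qualifies as the test disk only when it is unpartitioned (spdk-gpt.py and blkid both find nothing, hence "No valid GPT data, bailing" and an empty PTTYPE) and at least min_disk_size (3 GiB). The size comes from the sector count in sysfs; the 512-byte multiplier is the kernel's fixed sysfs unit:

  # Sketch: size of a block device in bytes, from its sysfs sector count.
  sec_size_to_bytes() {
          local dev=$1
          echo $(( $(< "/sys/block/$dev/size") * 512 ))   # sysfs counts 512-byte sectors
  }

  min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in the trace
  (( $(sec_size_to_bytes nvme0n1) >= min_disk_size ))   # 1600321314816 passes easily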
00:03:31.392 19:04:17 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:03:31.392 19:04:17 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:31.392 19:04:17 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:31.392 19:04:17 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:03:31.392 ************************************
00:03:31.392 START TEST nvme_mount
00:03:31.392 ************************************
00:03:31.392 19:04:17 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount
00:03:31.392 19:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:03:31.392 19:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:03:31.392 19:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:31.392 19:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:31.392 19:04:17 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:03:31.392 19:04:17 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:03:31.392 19:04:17 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1
00:03:31.392 19:04:17 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824
00:03:31.392 19:04:17 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:03:31.392 19:04:17 setup.sh.devices.nvme_mount -- setup/common.sh@44-47 -- # parts=(nvme0n1p1)  [partition-name loop condensed]
00:03:31.392 19:04:17 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:03:31.392 19:04:17 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:03:31.392 19:04:17 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:03:32.332 Creating new GPT entries in memory.
00:03:32.332 GPT data structures destroyed! You may now partition the disk using fdisk or
00:03:32.332 other utilities.
00:03:32.332 19:04:18 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:32.332 19:04:18 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:32.332 19:04:18 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:03:33.269 Creating new GPT entries in memory.
00:03:33.269 The operation has completed successfully.
00:03:33.269 19:04:19 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1318286
00:03:33.528 19:04:19 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:33.528 19:04:19 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=
00:03:33.528 19:04:19 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:33.528 19:04:19 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:03:33.529 19:04:19 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:03:33.529 19:04:19 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
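Stripped of the harness, the sequence the test just ran is a plain partition-format-mount pipeline. A standalone sketch with the device and mount point from the trace (any spare disk would do):

  # Sketch: carve a 1 GiB partition, format it, and mount it -- the same
  # steps the trace shows for /dev/nvme0n1.
  disk=/dev/nvme0n1
  mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount

  sgdisk "$disk" --zap-all                            # wipe any old GPT/MBR structures
  flock "$disk" sgdisk "$disk" --new=1:2048:2099199   # 2097152 sectors = 1 GiB
  mkdir -p "$mnt"
  mkfs.ext4 -qF "${disk}p1"                           # -q quiet, -F force non-interactive
  mount "${disk}p1" "$mnt"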
00:03:33.529 19:04:19 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:33.529 19:04:19 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0
00:03:33.529 19:04:19 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1
00:03:33.529 19:04:19 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:33.529 19:04:19 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:33.529 19:04:19 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:03:33.529 19:04:19 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:33.529 19:04:19 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:03:33.529 19:04:19 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:03:33.529 19:04:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:33.529 19:04:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0
00:03:33.529 19:04:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:03:33.529 19:04:19 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:33.529 19:04:19 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:36.066 19:04:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:03:36.066 19:04:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:36.066 [... the same compare/read pair repeats for 0000:00:04.6 through 0000:00:04.0 and 0000:80:04.7 through 0000:80:04.0 -- the other PCI functions setup.sh reports, none of which match the test device ...]
00:03:36.326 19:04:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:03:36.326 19:04:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:03:36.326 19:04:22 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:03:36.326 19:04:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:36.326 19:04:22 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:36.326 19:04:22 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:03:36.326 19:04:22 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:36.326 19:04:22 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:36.326 19:04:22 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:36.326 19:04:22 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme
00:03:36.326 19:04:22 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:36.326 19:04:22 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:36.326 19:04:22 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:36.326 19:04:22 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:03:36.326 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:03:36.326 19:04:22 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:36.326 19:04:22 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:36.586 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:03:36.586 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54
00:03:36.586 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:03:36.586 /dev/nvme0n1: calling ioctl to re-read partition table: Success
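The wipefs output above is worth decoding: cleanup erases only the on-disk magic bytes, not the data. An annotated replay, with offsets taken from this run:

  umount "$mnt" 2>/dev/null || true
  wipefs --all /dev/nvme0n1p1   # 53 ef @ 0x438 -> ext4 superblock magic
  wipefs --all /dev/nvme0n1     # "EFI PART" (45 46 49 20 50 41 52 54) @ 0x200 -> primary GPT
                                # the same 8 bytes near end of disk -> backup GPT
                                # 55 aa @ 0x1fe -> protective-MBR signature
                                # the kernel then re-reads the now-empty partition table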
00:03:36.586 19:04:22 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M
00:03:36.586 19:04:22 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M
00:03:36.586 19:04:22 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:36.586 19:04:22 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:03:36.586 19:04:22 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:03:36.845 19:04:22 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:36.845 19:04:22 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:36.845 19:04:22 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0
00:03:36.845 19:04:22 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1
00:03:36.845 19:04:22 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:36.845 19:04:22 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:36.845 19:04:22 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:03:36.845 19:04:22 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:36.845 19:04:22 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:03:36.845 19:04:22 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:03:36.845 19:04:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:36.845 19:04:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0
00:03:36.845 19:04:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:03:36.845 19:04:22 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:36.845 19:04:22 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:40.152 [... the per-PCI-address scan repeats as before until the test device comes up ...]
00:03:40.152 19:04:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:03:40.152 19:04:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]]
00:03:40.152 19:04:25 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:03:40.152 19:04:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:40.152 19:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:40.152 19:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:03:40.152 19:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:40.152 19:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:40.152 19:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
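verify() is the pattern repeated at every stage: with PCI_ALLOWED narrowed to the test device, setup.sh config must report the device as busy with exactly the expected mounts. A sketch of the matching (the column layout of the config output is inferred from the read pattern):

  # Sketch: confirm setup.sh refuses to rebind a device that is in use.
  verify() {
          local dev=$1 mounts=$2 found=0 pci status
          while read -r pci _ _ status; do
                  [[ $pci == "$dev" ]] || continue
                  # e.g. "Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev"
                  [[ $status == *"Active devices: "*"$mounts"* ]] && found=1
          done < <(PCI_ALLOWED=$dev scripts/setup.sh config)
          (( found == 1 ))
  }

  verify 0000:d8:00.0 nvme0n1:nvme0n1   # succeeds while the filesystem is mounted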
00:03:40.152 19:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:40.152 19:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' ''
00:03:40.152 19:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0
00:03:40.152 19:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1
00:03:40.152 19:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=
00:03:40.152 19:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=
00:03:40.152 19:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:03:40.152 19:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]]
00:03:40.152 19:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:03:40.152 19:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:40.152 19:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0
00:03:40.152 19:04:26 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:03:40.152 19:04:26 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:40.152 19:04:26 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:43.453 [... PCI scan condensed; the filesystem is unmounted now, so the device reports plain data usage ...]
00:03:43.454 19:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:03:43.454 19:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]]
00:03:43.454 19:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:03:43.454 19:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.454 19:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:43.454 19:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:03:43.454 19:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0
00:03:43.454 19:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme
00:03:43.454 19:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:43.454 19:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:43.454 19:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:43.454 19:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:43.454 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:03:43.454 
00:03:43.454 real 0m12.035s
00:03:43.454 user 0m3.434s
00:03:43.454 sys 0m6.436s
00:03:43.454 19:04:29 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:43.454 19:04:29 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x
00:03:43.454 ************************************
00:03:43.454 END TEST nvme_mount
00:03:43.454 ************************************
00:03:43.454 19:04:29 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount
00:03:43.454 19:04:29 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:43.454 19:04:29 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:43.454 19:04:29 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:03:43.454 ************************************
00:03:43.454 START TEST dm_mount
00:03:43.454 ************************************
00:03:43.454 19:04:29 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount
00:03:43.454 19:04:29 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1
00:03:43.454 19:04:29 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1
00:03:43.454 19:04:29 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2
00:03:43.454 19:04:29 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1
00:03:43.454 19:04:29 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:03:43.454 19:04:29 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2
00:03:43.454 19:04:29 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824
00:03:43.454 19:04:29 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:03:43.454 19:04:29 setup.sh.devices.dm_mount -- setup/common.sh@44-47 -- # parts=(nvme0n1p1 nvme0n1p2)  [partition-name loop condensed]
00:03:43.454 19:04:29 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:03:43.454 19:04:29 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:03:43.454 19:04:29 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:03:44.392 Creating new GPT entries in memory.
00:03:44.392 GPT data structures destroyed! You may now partition the disk using fdisk or
00:03:44.392 other utilities.
00:03:44.392 19:04:30 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:44.392 19:04:30 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:44.392 19:04:30 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:03:45.768 Creating new GPT entries in memory.
00:03:45.768 The operation has completed successfully.
00:03:45.768 19:04:31 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
00:03:46.703 The operation has completed successfully.
00:03:46.703 19:04:32 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1322689
00:03:46.703 19:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test
00:03:46.703 19:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:46.703 19:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:46.703 19:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:03:46.703 19:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5}
00:03:46.703 19:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:46.703 19:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break
00:03:46.703 19:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:46.703 19:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test
00:03:46.703 19:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0
00:03:46.703 19:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0
00:03:46.703 19:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]]
00:03:46.703 19:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]]
00:03:46.703 19:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:46.703 19:04:32 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size=
00:03:46.703 19:04:32 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:46.703 19:04:32 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:46.703 19:04:32 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
00:03:46.703 19:04:32 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
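The trace shows dmsetup create nvme_dm_test but not its table, which xtrace cannot capture (it arrives on stdin). A plausible reconstruction, assuming a linear map over the two 1 GiB partitions just created (2097152 sectors each; the actual table used by the test is not visible in this log):

  # Sketch only -- the real table is not shown in the trace.
  dmsetup create nvme_dm_test <<'TABLE'
  0       2097152 linear /dev/nvme0n1p1 0
  2097152 2097152 linear /dev/nvme0n1p2 0
  TABLE

The test then polls up to five times for /dev/mapper/nvme_dm_test to appear before formatting it, which is why the for t in {1..5} loop breaks on the first pass here.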
00:03:46.703 19:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:46.703 19:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0
00:03:46.703 19:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test
00:03:46.703 19:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:46.703 19:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:46.703 19:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0
00:03:46.703 19:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:03:46.703 19:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # :
00:03:46.703 19:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status
00:03:46.703 19:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:46.703 19:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0
00:03:46.703 19:04:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
00:03:46.703 19:04:32 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:46.703 19:04:32 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:49.998 [... PCI scan condensed, as in the nvme_mount stages ...]
00:03:49.998 19:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:03:49.998 19:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]]
00:03:49.998 19:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
00:03:49.998 19:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:49.998 19:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:49.998 19:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]]
00:03:49.998 19:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:49.998 19:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:03:49.998 19:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:49.998 19:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
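The holder@nvme0n1p1:dm-0 entries the next verify expects mirror the sysfs holder links the test already checked right after dmsetup create. Resolving the dm node and the links looks like this (names from the trace):

  dm=$(readlink -f /dev/mapper/nvme_dm_test)        # -> /dev/dm-0 in this run
  dm=${dm##*/}                                      # -> dm-0
  [[ -e /sys/class/block/nvme0n1p1/holders/$dm ]]   # partition 1 is claimed by dm-0
  [[ -e /sys/class/block/nvme0n1p2/holders/$dm ]]   # partition 2 likewise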
19:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:49.999 19:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:49.999 19:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:49.999 19:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:49.999 19:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.999 19:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:49.999 19:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:49.999 19:04:35 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.999 19:04:35 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:52.533 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.533 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.533 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.533 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.533 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.533 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.533 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.533 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.533 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.533 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.533 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.533 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.533 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.533 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.533 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.533 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.533 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.533 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.533 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.533 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.533 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.533 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.533 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.533 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:03:52.533 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.533 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.533 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.533 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.533 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.533 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.533 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.533 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.533 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:52.533 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:52.533 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:52.533 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.792 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:52.792 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:52.792 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:52.792 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:52.792 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:52.792 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:52.792 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:52.792 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:52.792 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:52.792 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:52.792 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:52.792 19:04:38 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:52.792 00:03:52.792 real 0m9.345s 00:03:52.792 user 0m2.221s 00:03:52.792 sys 0m4.167s 00:03:52.792 19:04:38 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:52.792 19:04:38 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:52.792 ************************************ 00:03:52.792 END TEST dm_mount 00:03:52.792 ************************************ 00:03:52.792 19:04:38 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:52.792 19:04:38 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:52.792 19:04:38 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:52.792 19:04:38 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:52.792 
19:04:38 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:03:52.792 19:04:38 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:52.792 19:04:38 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:53.052 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:03:53.052 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54
00:03:53.052 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:03:53.052 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:03:53.052 19:04:39 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
00:03:53.052 19:04:39 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:53.052 19:04:39 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:03:53.052 19:04:39 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:53.052 19:04:39 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:03:53.052 19:04:39 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:03:53.052 19:04:39 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:03:53.052
00:03:53.052 real 0m25.818s
00:03:53.052 user 0m7.187s
00:03:53.052 sys 0m13.437s
00:03:53.052 19:04:39 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:53.052 19:04:39 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:03:53.052 ************************************
00:03:53.052 END TEST devices
00:03:53.052 ************************************
00:03:53.052
00:03:53.052 real 1m28.321s
00:03:53.052 user 0m26.346s
00:03:53.052 sys 0m49.961s
00:03:53.052 19:04:39 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:53.052 19:04:39 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:53.052 ************************************
00:03:53.052 END TEST setup.sh
00:03:53.052 ************************************
00:03:53.311 19:04:39 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:56.600 Hugepages
00:03:56.600 node hugesize free / total
00:03:56.600 node0 1048576kB 0 / 0
00:03:56.600 node0 2048kB 2048 / 2048
00:03:56.600 node1 1048576kB 0 / 0
00:03:56.600 node1 2048kB 0 / 0
00:03:56.600
00:03:56.600 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:56.600 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:03:56.600 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:03:56.600 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:03:56.600 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:03:56.600 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:03:56.600 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:03:56.600 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:03:56.600 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:03:56.600 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:03:56.600 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:03:56.600 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:03:56.600 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:03:56.600 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:03:56.600 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:03:56.600 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:03:56.600 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:03:56.600 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:03:56.600 19:04:42 -- spdk/autotest.sh@130 -- # uname -s
00:03:56.600
19:04:42 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:56.600 19:04:42 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:56.600 19:04:42 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:59.892 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:59.892 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:59.892 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:59.892 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:59.892 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:59.892 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:59.892 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:59.892 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:59.892 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:59.892 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:59.892 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:59.892 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:59.892 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:59.892 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:59.892 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:59.892 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:01.328 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:01.328 19:04:47 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:02.265 19:04:48 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:02.265 19:04:48 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:02.265 19:04:48 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:02.265 19:04:48 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:02.265 19:04:48 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:02.265 19:04:48 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:02.265 19:04:48 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:02.265 19:04:48 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:02.265 19:04:48 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:02.524 19:04:48 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:02.524 19:04:48 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:04:02.525 19:04:48 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:05.814 Waiting for block devices as requested 00:04:05.814 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:05.814 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:05.814 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:06.072 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:06.072 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:06.072 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:06.331 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:06.331 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:06.331 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:06.591 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:06.591 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:06.591 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:06.850 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:06.850 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:06.850 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:07.109 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:07.109 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:04:07.367 19:04:53 -- common/autotest_common.sh@1538 -- # for bdf in 
"${bdfs[@]}" 00:04:07.367 19:04:53 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:04:07.367 19:04:53 -- common/autotest_common.sh@1502 -- # grep 0000:d8:00.0/nvme/nvme 00:04:07.367 19:04:53 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:07.367 19:04:53 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:07.367 19:04:53 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:04:07.367 19:04:53 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:07.367 19:04:53 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:07.367 19:04:53 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:07.367 19:04:53 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:07.367 19:04:53 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:07.367 19:04:53 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:07.367 19:04:53 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:07.367 19:04:53 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:04:07.367 19:04:53 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:07.367 19:04:53 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:07.367 19:04:53 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:07.367 19:04:53 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:07.367 19:04:53 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:07.367 19:04:53 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:07.367 19:04:53 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:07.367 19:04:53 -- common/autotest_common.sh@1557 -- # continue 00:04:07.367 19:04:53 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:07.367 19:04:53 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:07.367 19:04:53 -- common/autotest_common.sh@10 -- # set +x 00:04:07.367 19:04:53 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:07.367 19:04:53 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:07.367 19:04:53 -- common/autotest_common.sh@10 -- # set +x 00:04:07.367 19:04:53 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:10.656 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:10.656 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:10.656 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:10.656 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:10.656 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:10.656 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:10.656 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:10.656 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:10.656 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:10.656 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:10.656 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:10.656 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:10.656 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:10.656 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:10.656 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:10.656 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:12.035 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:12.295 19:04:58 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:12.295 19:04:58 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:12.296 
19:04:58 -- common/autotest_common.sh@10 -- # set +x 00:04:12.296 19:04:58 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:12.296 19:04:58 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:12.296 19:04:58 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:12.296 19:04:58 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:12.296 19:04:58 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:12.296 19:04:58 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:12.296 19:04:58 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:12.296 19:04:58 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:12.296 19:04:58 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:12.296 19:04:58 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:12.296 19:04:58 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:12.296 19:04:58 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:12.296 19:04:58 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:04:12.296 19:04:58 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:12.296 19:04:58 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:04:12.296 19:04:58 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:12.296 19:04:58 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:12.296 19:04:58 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:12.296 19:04:58 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:d8:00.0 00:04:12.296 19:04:58 -- common/autotest_common.sh@1592 -- # [[ -z 0000:d8:00.0 ]] 00:04:12.296 19:04:58 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=1332134 00:04:12.296 19:04:58 -- common/autotest_common.sh@1598 -- # waitforlisten 1332134 00:04:12.296 19:04:58 -- common/autotest_common.sh@831 -- # '[' -z 1332134 ']' 00:04:12.296 19:04:58 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:12.296 19:04:58 -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:12.296 19:04:58 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:12.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:12.296 19:04:58 -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:12.296 19:04:58 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:12.296 19:04:58 -- common/autotest_common.sh@10 -- # set +x 00:04:12.555 [2024-07-24 19:04:58.539623] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:04:12.555 [2024-07-24 19:04:58.539673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1332134 ]
00:04:12.555 EAL: No free 2048 kB hugepages reported on node 1
00:04:12.555 [2024-07-24 19:04:58.610365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:12.555 [2024-07-24 19:04:58.684131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:13.123 19:04:59 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:13.123 19:04:59 -- common/autotest_common.sh@864 -- # return 0
00:04:13.123 19:04:59 -- common/autotest_common.sh@1600 -- # bdf_id=0
00:04:13.123 19:04:59 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}"
00:04:13.123 19:04:59 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0
00:04:16.411 nvme0n1
00:04:16.411 19:05:02 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:04:16.411 [2024-07-24 19:05:02.488923] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal
00:04:16.411 request:
00:04:16.411 {
00:04:16.411 "nvme_ctrlr_name": "nvme0",
00:04:16.411 "password": "test",
00:04:16.411 "method": "bdev_nvme_opal_revert",
00:04:16.411 "req_id": 1
00:04:16.411 }
00:04:16.411 Got JSON-RPC error response
00:04:16.411 response:
00:04:16.411 {
00:04:16.411 "code": -32602,
00:04:16.411 "message": "Invalid parameters"
00:04:16.411 }
00:04:16.411 19:05:02 -- common/autotest_common.sh@1604 -- # true
00:04:16.411 19:05:02 -- common/autotest_common.sh@1605 -- # (( ++bdf_id ))
00:04:16.411 19:05:02 -- common/autotest_common.sh@1608 -- # killprocess 1332134
00:04:16.411 19:05:02 -- common/autotest_common.sh@950 -- # '[' -z 1332134 ']'
00:04:16.411 19:05:02 -- common/autotest_common.sh@954 -- # kill -0 1332134
00:04:16.411 19:05:02 -- common/autotest_common.sh@955 -- # uname
00:04:16.411 19:05:02 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:16.411 19:05:02 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1332134
00:04:16.411 19:05:02 -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:16.411 19:05:02 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:16.411 19:05:02 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1332134'
00:04:16.411 killing process with pid 1332134
00:04:16.411 19:05:02 -- common/autotest_common.sh@969 -- # kill 1332134
00:04:16.411 19:05:02 -- common/autotest_common.sh@974 -- # wait 1332134
00:04:18.945 19:05:04 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']'
00:04:18.945 19:05:04 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']'
00:04:18.945 19:05:04 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:04:18.945 19:05:04 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:04:18.945 19:05:04 -- spdk/autotest.sh@162 -- # timing_enter lib
00:04:18.945 19:05:04 -- common/autotest_common.sh@724 -- # xtrace_disable
00:04:18.945 19:05:04 -- common/autotest_common.sh@10 -- # set +x
00:04:18.945 19:05:04 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]]
00:04:18.945 19:05:04 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:04:18.945 19:05:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:18.945 19:05:04 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:18.945 19:05:04 -- common/autotest_common.sh@10 -- # set +x
00:04:18.945 ************************************
00:04:18.945 START TEST env
00:04:18.945 ************************************
00:04:18.945 19:05:04 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:04:18.945 * Looking for test storage...
00:04:18.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env
00:04:18.945 19:05:04 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:04:18.945 19:05:04 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:18.945 19:05:04 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:18.945 19:05:04 env -- common/autotest_common.sh@10 -- # set +x
00:04:18.945 ************************************
00:04:18.945 START TEST env_memory
00:04:18.945 ************************************
00:04:18.945 19:05:04 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:04:18.945
00:04:18.945
00:04:18.945 CUnit - A unit testing framework for C - Version 2.1-3
00:04:18.945 http://cunit.sourceforge.net/
00:04:18.945
00:04:18.945
00:04:18.945 Suite: memory
00:04:18.945 Test: alloc and free memory map ...[2024-07-24 19:05:04.909151] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:04:18.945 passed
00:04:18.946 Test: mem map translation ...[2024-07-24 19:05:04.928408] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:04:18.946 [2024-07-24 19:05:04.928424] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:04:18.946 [2024-07-24 19:05:04.928462] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:04:18.946 [2024-07-24 19:05:04.928471] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:04:18.946 passed
00:04:18.946 Test: mem map registration ...[2024-07-24 19:05:04.965370] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234
00:04:18.946 [2024-07-24 19:05:04.965385] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152
00:04:18.946 passed
00:04:18.946 Test: mem map adjacent registrations ...passed
00:04:18.946
00:04:18.946 Run Summary: Type Total Ran Passed Failed Inactive
00:04:18.946 suites 1 1 n/a 0 0
00:04:18.946 tests 4 4 4 0 0
00:04:18.946 asserts 152 152 152 0 n/a
00:04:18.946
00:04:18.946 Elapsed time = 0.137 seconds
00:04:18.946
00:04:18.946 real 0m0.151s
00:04:18.946 user 0m0.142s
00:04:18.946 sys 0m0.009s
00:04:18.946 19:05:05 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:18.946 19:05:05 env.env_memory -- common/autotest_common.sh@10 -- #
set +x 00:04:18.946 ************************************ 00:04:18.946 END TEST env_memory 00:04:18.946 ************************************ 00:04:18.946 19:05:05 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:18.946 19:05:05 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:18.946 19:05:05 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:18.946 19:05:05 env -- common/autotest_common.sh@10 -- # set +x 00:04:18.946 ************************************ 00:04:18.946 START TEST env_vtophys 00:04:18.946 ************************************ 00:04:18.946 19:05:05 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:18.946 EAL: lib.eal log level changed from notice to debug 00:04:18.946 EAL: Detected lcore 0 as core 0 on socket 0 00:04:18.946 EAL: Detected lcore 1 as core 1 on socket 0 00:04:18.946 EAL: Detected lcore 2 as core 2 on socket 0 00:04:18.946 EAL: Detected lcore 3 as core 3 on socket 0 00:04:18.946 EAL: Detected lcore 4 as core 4 on socket 0 00:04:18.946 EAL: Detected lcore 5 as core 5 on socket 0 00:04:18.946 EAL: Detected lcore 6 as core 6 on socket 0 00:04:18.946 EAL: Detected lcore 7 as core 8 on socket 0 00:04:18.946 EAL: Detected lcore 8 as core 9 on socket 0 00:04:18.946 EAL: Detected lcore 9 as core 10 on socket 0 00:04:18.946 EAL: Detected lcore 10 as core 11 on socket 0 00:04:18.946 EAL: Detected lcore 11 as core 12 on socket 0 00:04:18.946 EAL: Detected lcore 12 as core 13 on socket 0 00:04:18.946 EAL: Detected lcore 13 as core 14 on socket 0 00:04:18.946 EAL: Detected lcore 14 as core 16 on socket 0 00:04:18.946 EAL: Detected lcore 15 as core 17 on socket 0 00:04:18.946 EAL: Detected lcore 16 as core 18 on socket 0 00:04:18.946 EAL: Detected lcore 17 as core 19 on socket 0 00:04:18.946 EAL: Detected lcore 18 as core 20 on socket 0 00:04:18.946 EAL: Detected lcore 19 as core 21 on socket 0 00:04:18.946 EAL: Detected lcore 20 as core 22 on socket 0 00:04:18.946 EAL: Detected lcore 21 as core 24 on socket 0 00:04:18.946 EAL: Detected lcore 22 as core 25 on socket 0 00:04:18.946 EAL: Detected lcore 23 as core 26 on socket 0 00:04:18.946 EAL: Detected lcore 24 as core 27 on socket 0 00:04:18.946 EAL: Detected lcore 25 as core 28 on socket 0 00:04:18.946 EAL: Detected lcore 26 as core 29 on socket 0 00:04:18.946 EAL: Detected lcore 27 as core 30 on socket 0 00:04:18.946 EAL: Detected lcore 28 as core 0 on socket 1 00:04:18.946 EAL: Detected lcore 29 as core 1 on socket 1 00:04:18.946 EAL: Detected lcore 30 as core 2 on socket 1 00:04:18.946 EAL: Detected lcore 31 as core 3 on socket 1 00:04:18.946 EAL: Detected lcore 32 as core 4 on socket 1 00:04:18.946 EAL: Detected lcore 33 as core 5 on socket 1 00:04:18.946 EAL: Detected lcore 34 as core 6 on socket 1 00:04:18.946 EAL: Detected lcore 35 as core 8 on socket 1 00:04:18.946 EAL: Detected lcore 36 as core 9 on socket 1 00:04:18.946 EAL: Detected lcore 37 as core 10 on socket 1 00:04:18.946 EAL: Detected lcore 38 as core 11 on socket 1 00:04:18.946 EAL: Detected lcore 39 as core 12 on socket 1 00:04:18.946 EAL: Detected lcore 40 as core 13 on socket 1 00:04:18.946 EAL: Detected lcore 41 as core 14 on socket 1 00:04:18.946 EAL: Detected lcore 42 as core 16 on socket 1 00:04:18.946 EAL: Detected lcore 43 as core 17 on socket 1 00:04:18.946 EAL: Detected lcore 44 as core 18 on socket 1 00:04:18.946 EAL: Detected lcore 45 as core 19 on socket 1 
00:04:18.946 EAL: Detected lcore 46 as core 20 on socket 1 00:04:18.946 EAL: Detected lcore 47 as core 21 on socket 1 00:04:18.946 EAL: Detected lcore 48 as core 22 on socket 1 00:04:18.946 EAL: Detected lcore 49 as core 24 on socket 1 00:04:18.946 EAL: Detected lcore 50 as core 25 on socket 1 00:04:18.946 EAL: Detected lcore 51 as core 26 on socket 1 00:04:18.946 EAL: Detected lcore 52 as core 27 on socket 1 00:04:18.946 EAL: Detected lcore 53 as core 28 on socket 1 00:04:18.946 EAL: Detected lcore 54 as core 29 on socket 1 00:04:18.946 EAL: Detected lcore 55 as core 30 on socket 1 00:04:18.946 EAL: Detected lcore 56 as core 0 on socket 0 00:04:18.946 EAL: Detected lcore 57 as core 1 on socket 0 00:04:18.946 EAL: Detected lcore 58 as core 2 on socket 0 00:04:18.946 EAL: Detected lcore 59 as core 3 on socket 0 00:04:18.946 EAL: Detected lcore 60 as core 4 on socket 0 00:04:18.946 EAL: Detected lcore 61 as core 5 on socket 0 00:04:18.946 EAL: Detected lcore 62 as core 6 on socket 0 00:04:18.946 EAL: Detected lcore 63 as core 8 on socket 0 00:04:18.946 EAL: Detected lcore 64 as core 9 on socket 0 00:04:18.946 EAL: Detected lcore 65 as core 10 on socket 0 00:04:18.946 EAL: Detected lcore 66 as core 11 on socket 0 00:04:18.946 EAL: Detected lcore 67 as core 12 on socket 0 00:04:18.946 EAL: Detected lcore 68 as core 13 on socket 0 00:04:18.946 EAL: Detected lcore 69 as core 14 on socket 0 00:04:18.946 EAL: Detected lcore 70 as core 16 on socket 0 00:04:18.946 EAL: Detected lcore 71 as core 17 on socket 0 00:04:18.946 EAL: Detected lcore 72 as core 18 on socket 0 00:04:18.946 EAL: Detected lcore 73 as core 19 on socket 0 00:04:18.946 EAL: Detected lcore 74 as core 20 on socket 0 00:04:18.946 EAL: Detected lcore 75 as core 21 on socket 0 00:04:18.946 EAL: Detected lcore 76 as core 22 on socket 0 00:04:18.946 EAL: Detected lcore 77 as core 24 on socket 0 00:04:18.946 EAL: Detected lcore 78 as core 25 on socket 0 00:04:18.946 EAL: Detected lcore 79 as core 26 on socket 0 00:04:18.946 EAL: Detected lcore 80 as core 27 on socket 0 00:04:18.946 EAL: Detected lcore 81 as core 28 on socket 0 00:04:18.946 EAL: Detected lcore 82 as core 29 on socket 0 00:04:18.946 EAL: Detected lcore 83 as core 30 on socket 0 00:04:18.946 EAL: Detected lcore 84 as core 0 on socket 1 00:04:18.946 EAL: Detected lcore 85 as core 1 on socket 1 00:04:18.946 EAL: Detected lcore 86 as core 2 on socket 1 00:04:18.946 EAL: Detected lcore 87 as core 3 on socket 1 00:04:18.946 EAL: Detected lcore 88 as core 4 on socket 1 00:04:18.946 EAL: Detected lcore 89 as core 5 on socket 1 00:04:18.946 EAL: Detected lcore 90 as core 6 on socket 1 00:04:18.946 EAL: Detected lcore 91 as core 8 on socket 1 00:04:18.946 EAL: Detected lcore 92 as core 9 on socket 1 00:04:18.946 EAL: Detected lcore 93 as core 10 on socket 1 00:04:18.946 EAL: Detected lcore 94 as core 11 on socket 1 00:04:18.946 EAL: Detected lcore 95 as core 12 on socket 1 00:04:18.946 EAL: Detected lcore 96 as core 13 on socket 1 00:04:18.946 EAL: Detected lcore 97 as core 14 on socket 1 00:04:18.946 EAL: Detected lcore 98 as core 16 on socket 1 00:04:18.946 EAL: Detected lcore 99 as core 17 on socket 1 00:04:18.946 EAL: Detected lcore 100 as core 18 on socket 1 00:04:18.946 EAL: Detected lcore 101 as core 19 on socket 1 00:04:18.946 EAL: Detected lcore 102 as core 20 on socket 1 00:04:18.946 EAL: Detected lcore 103 as core 21 on socket 1 00:04:18.946 EAL: Detected lcore 104 as core 22 on socket 1 00:04:18.946 EAL: Detected lcore 105 as core 24 on socket 1 00:04:18.946 EAL: Detected 
lcore 106 as core 25 on socket 1 00:04:18.946 EAL: Detected lcore 107 as core 26 on socket 1 00:04:18.946 EAL: Detected lcore 108 as core 27 on socket 1 00:04:18.946 EAL: Detected lcore 109 as core 28 on socket 1 00:04:18.946 EAL: Detected lcore 110 as core 29 on socket 1 00:04:18.946 EAL: Detected lcore 111 as core 30 on socket 1 00:04:18.946 EAL: Maximum logical cores by configuration: 128 00:04:18.946 EAL: Detected CPU lcores: 112 00:04:18.946 EAL: Detected NUMA nodes: 2 00:04:18.946 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:18.946 EAL: Detected shared linkage of DPDK 00:04:18.946 EAL: No shared files mode enabled, IPC will be disabled 00:04:18.946 EAL: Bus pci wants IOVA as 'DC' 00:04:18.946 EAL: Buses did not request a specific IOVA mode. 00:04:18.946 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:18.946 EAL: Selected IOVA mode 'VA' 00:04:18.946 EAL: No free 2048 kB hugepages reported on node 1 00:04:18.946 EAL: Probing VFIO support... 00:04:18.946 EAL: IOMMU type 1 (Type 1) is supported 00:04:18.946 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:18.946 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:18.946 EAL: VFIO support initialized 00:04:18.946 EAL: Ask a virtual area of 0x2e000 bytes 00:04:18.946 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:18.946 EAL: Setting up physically contiguous memory... 00:04:18.946 EAL: Setting maximum number of open files to 524288 00:04:18.946 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:18.946 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:18.946 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:18.947 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.947 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:18.947 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:18.947 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.947 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:18.947 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:18.947 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.947 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:18.947 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:18.947 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.947 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:18.947 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:18.947 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.947 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:18.947 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:18.947 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.947 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:18.947 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:18.947 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.947 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:18.947 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:18.947 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.947 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:18.947 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:18.947 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:18.947 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.947 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:18.947 EAL: 
Memseg list allocated at socket 1, page size 0x800kB 00:04:18.947 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.947 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:18.947 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:18.947 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.947 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:18.947 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:18.947 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.947 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:18.947 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:18.947 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.947 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:18.947 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:18.947 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.947 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:18.947 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:18.947 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.947 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:18.947 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:18.947 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.947 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:18.947 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:18.947 EAL: Hugepages will be freed exactly as allocated. 00:04:18.947 EAL: No shared files mode enabled, IPC is disabled 00:04:18.947 EAL: No shared files mode enabled, IPC is disabled 00:04:18.947 EAL: TSC frequency is ~2500000 KHz 00:04:18.947 EAL: Main lcore 0 is ready (tid=7f892be83a00;cpuset=[0]) 00:04:18.947 EAL: Trying to obtain current memory policy. 00:04:18.947 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.947 EAL: Restoring previous memory policy: 0 00:04:18.947 EAL: request: mp_malloc_sync 00:04:18.947 EAL: No shared files mode enabled, IPC is disabled 00:04:18.947 EAL: Heap on socket 0 was expanded by 2MB 00:04:18.947 EAL: No shared files mode enabled, IPC is disabled 00:04:18.947 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:18.947 EAL: Mem event callback 'spdk:(nil)' registered 00:04:19.207 00:04:19.207 00:04:19.207 CUnit - A unit testing framework for C - Version 2.1-3 00:04:19.207 http://cunit.sourceforge.net/ 00:04:19.207 00:04:19.207 00:04:19.207 Suite: components_suite 00:04:19.207 Test: vtophys_malloc_test ...passed 00:04:19.207 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:19.207 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.207 EAL: Restoring previous memory policy: 4 00:04:19.207 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.207 EAL: request: mp_malloc_sync 00:04:19.207 EAL: No shared files mode enabled, IPC is disabled 00:04:19.207 EAL: Heap on socket 0 was expanded by 4MB 00:04:19.207 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.207 EAL: request: mp_malloc_sync 00:04:19.207 EAL: No shared files mode enabled, IPC is disabled 00:04:19.207 EAL: Heap on socket 0 was shrunk by 4MB 00:04:19.207 EAL: Trying to obtain current memory policy. 
00:04:19.207 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.207 EAL: Restoring previous memory policy: 4 00:04:19.207 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.207 EAL: request: mp_malloc_sync 00:04:19.207 EAL: No shared files mode enabled, IPC is disabled 00:04:19.207 EAL: Heap on socket 0 was expanded by 6MB 00:04:19.207 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.207 EAL: request: mp_malloc_sync 00:04:19.207 EAL: No shared files mode enabled, IPC is disabled 00:04:19.207 EAL: Heap on socket 0 was shrunk by 6MB 00:04:19.207 EAL: Trying to obtain current memory policy. 00:04:19.207 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.207 EAL: Restoring previous memory policy: 4 00:04:19.207 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.207 EAL: request: mp_malloc_sync 00:04:19.207 EAL: No shared files mode enabled, IPC is disabled 00:04:19.207 EAL: Heap on socket 0 was expanded by 10MB 00:04:19.207 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.207 EAL: request: mp_malloc_sync 00:04:19.207 EAL: No shared files mode enabled, IPC is disabled 00:04:19.207 EAL: Heap on socket 0 was shrunk by 10MB 00:04:19.207 EAL: Trying to obtain current memory policy. 00:04:19.207 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.207 EAL: Restoring previous memory policy: 4 00:04:19.207 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.207 EAL: request: mp_malloc_sync 00:04:19.207 EAL: No shared files mode enabled, IPC is disabled 00:04:19.207 EAL: Heap on socket 0 was expanded by 18MB 00:04:19.207 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.207 EAL: request: mp_malloc_sync 00:04:19.207 EAL: No shared files mode enabled, IPC is disabled 00:04:19.207 EAL: Heap on socket 0 was shrunk by 18MB 00:04:19.207 EAL: Trying to obtain current memory policy. 00:04:19.207 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.207 EAL: Restoring previous memory policy: 4 00:04:19.207 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.207 EAL: request: mp_malloc_sync 00:04:19.207 EAL: No shared files mode enabled, IPC is disabled 00:04:19.207 EAL: Heap on socket 0 was expanded by 34MB 00:04:19.207 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.207 EAL: request: mp_malloc_sync 00:04:19.207 EAL: No shared files mode enabled, IPC is disabled 00:04:19.207 EAL: Heap on socket 0 was shrunk by 34MB 00:04:19.207 EAL: Trying to obtain current memory policy. 00:04:19.207 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.207 EAL: Restoring previous memory policy: 4 00:04:19.207 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.207 EAL: request: mp_malloc_sync 00:04:19.207 EAL: No shared files mode enabled, IPC is disabled 00:04:19.207 EAL: Heap on socket 0 was expanded by 66MB 00:04:19.207 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.207 EAL: request: mp_malloc_sync 00:04:19.207 EAL: No shared files mode enabled, IPC is disabled 00:04:19.207 EAL: Heap on socket 0 was shrunk by 66MB 00:04:19.207 EAL: Trying to obtain current memory policy. 
00:04:19.207 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.207 EAL: Restoring previous memory policy: 4 00:04:19.207 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.207 EAL: request: mp_malloc_sync 00:04:19.207 EAL: No shared files mode enabled, IPC is disabled 00:04:19.207 EAL: Heap on socket 0 was expanded by 130MB 00:04:19.207 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.207 EAL: request: mp_malloc_sync 00:04:19.207 EAL: No shared files mode enabled, IPC is disabled 00:04:19.207 EAL: Heap on socket 0 was shrunk by 130MB 00:04:19.207 EAL: Trying to obtain current memory policy. 00:04:19.207 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.207 EAL: Restoring previous memory policy: 4 00:04:19.207 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.207 EAL: request: mp_malloc_sync 00:04:19.207 EAL: No shared files mode enabled, IPC is disabled 00:04:19.207 EAL: Heap on socket 0 was expanded by 258MB 00:04:19.207 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.207 EAL: request: mp_malloc_sync 00:04:19.207 EAL: No shared files mode enabled, IPC is disabled 00:04:19.207 EAL: Heap on socket 0 was shrunk by 258MB 00:04:19.207 EAL: Trying to obtain current memory policy. 00:04:19.207 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.497 EAL: Restoring previous memory policy: 4 00:04:19.497 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.497 EAL: request: mp_malloc_sync 00:04:19.497 EAL: No shared files mode enabled, IPC is disabled 00:04:19.497 EAL: Heap on socket 0 was expanded by 514MB 00:04:19.497 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.497 EAL: request: mp_malloc_sync 00:04:19.497 EAL: No shared files mode enabled, IPC is disabled 00:04:19.497 EAL: Heap on socket 0 was shrunk by 514MB 00:04:19.497 EAL: Trying to obtain current memory policy. 
00:04:19.497 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:19.755 EAL: Restoring previous memory policy: 4
00:04:19.755 EAL: Calling mem event callback 'spdk:(nil)'
00:04:19.755 EAL: request: mp_malloc_sync
00:04:19.755 EAL: No shared files mode enabled, IPC is disabled
00:04:19.755 EAL: Heap on socket 0 was expanded by 1026MB
00:04:20.013 EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.013 EAL: request: mp_malloc_sync
00:04:20.013 EAL: No shared files mode enabled, IPC is disabled
00:04:20.013 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:20.013 passed
00:04:20.013
00:04:20.013 Run Summary: Type Total Ran Passed Failed Inactive
00:04:20.013 suites 1 1 n/a 0 0
00:04:20.013 tests 2 2 2 0 0
00:04:20.013 asserts 497 497 497 0 n/a
00:04:20.013
00:04:20.013 Elapsed time = 0.957 seconds
00:04:20.013 EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.013 EAL: request: mp_malloc_sync
00:04:20.013 EAL: No shared files mode enabled, IPC is disabled
00:04:20.013 EAL: Heap on socket 0 was shrunk by 2MB
00:04:20.013 EAL: No shared files mode enabled, IPC is disabled
00:04:20.013 EAL: No shared files mode enabled, IPC is disabled
00:04:20.013 EAL: No shared files mode enabled, IPC is disabled
00:04:20.013
00:04:20.013 real 0m1.080s
00:04:20.013 user 0m0.633s
00:04:20.013 sys 0m0.423s
00:04:20.013 19:05:06 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:20.013 19:05:06 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:20.013 ************************************
00:04:20.013 END TEST env_vtophys
00:04:20.013 ************************************
00:04:20.013 19:05:06 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:20.013 19:05:06 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:20.013 19:05:06 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:20.013 19:05:06 env -- common/autotest_common.sh@10 -- # set +x
00:04:20.013 ************************************
00:04:20.013 START TEST env_pci
00:04:20.013 ************************************
00:04:20.013 19:05:06 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:20.272
00:04:20.272
00:04:20.272 CUnit - A unit testing framework for C - Version 2.1-3
00:04:20.272 http://cunit.sourceforge.net/
00:04:20.272
00:04:20.272
00:04:20.272 Suite: pci
00:04:20.272 Test: pci_hook ...[2024-07-24 19:05:06.268859] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1333580 has claimed it
00:04:20.273 EAL: Cannot find device (10000:00:01.0)
00:04:20.273 EAL: Failed to attach device on primary process
00:04:20.273 passed
00:04:20.273
00:04:20.273 Run Summary: Type Total Ran Passed Failed Inactive
00:04:20.273 suites 1 1 n/a 0 0
00:04:20.273 tests 1 1 1 0 0
00:04:20.273 asserts 25 25 25 0 n/a
00:04:20.273
00:04:20.273 Elapsed time = 0.035 seconds
00:04:20.273
00:04:20.273 real 0m0.057s
00:04:20.273 user 0m0.016s
00:04:20.273 sys 0m0.041s
00:04:20.273 19:05:06 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:20.273 19:05:06 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:20.273 ************************************
00:04:20.273 END TEST env_pci
00:04:20.273 ************************************
00:04:20.273 19:05:06 env -- env/env.sh@14 -- # argv='-c 0x1 '
19:05:06 env -- env/env.sh@15 -- # uname 00:04:20.273 19:05:06 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:20.273 19:05:06 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:20.273 19:05:06 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:20.273 19:05:06 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:20.273 19:05:06 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:20.273 19:05:06 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.273 ************************************ 00:04:20.273 START TEST env_dpdk_post_init 00:04:20.273 ************************************ 00:04:20.273 19:05:06 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:20.273 EAL: Detected CPU lcores: 112 00:04:20.273 EAL: Detected NUMA nodes: 2 00:04:20.273 EAL: Detected shared linkage of DPDK 00:04:20.273 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:20.273 EAL: Selected IOVA mode 'VA' 00:04:20.273 EAL: No free 2048 kB hugepages reported on node 1 00:04:20.273 EAL: VFIO support initialized 00:04:20.273 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:20.532 EAL: Using IOMMU type 1 (Type 1) 00:04:20.532 EAL: Ignore mapping IO port bar(1) 00:04:20.532 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:20.532 EAL: Ignore mapping IO port bar(1) 00:04:20.532 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:20.532 EAL: Ignore mapping IO port bar(1) 00:04:20.532 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:20.532 EAL: Ignore mapping IO port bar(1) 00:04:20.532 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:20.532 EAL: Ignore mapping IO port bar(1) 00:04:20.532 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:20.532 EAL: Ignore mapping IO port bar(1) 00:04:20.532 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:20.532 EAL: Ignore mapping IO port bar(1) 00:04:20.532 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:20.532 EAL: Ignore mapping IO port bar(1) 00:04:20.532 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:20.532 EAL: Ignore mapping IO port bar(1) 00:04:20.532 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:20.532 EAL: Ignore mapping IO port bar(1) 00:04:20.532 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:20.532 EAL: Ignore mapping IO port bar(1) 00:04:20.532 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:20.532 EAL: Ignore mapping IO port bar(1) 00:04:20.532 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:20.532 EAL: Ignore mapping IO port bar(1) 00:04:20.532 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:20.532 EAL: Ignore mapping IO port bar(1) 00:04:20.532 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:20.532 EAL: Ignore mapping IO port bar(1) 00:04:20.532 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:20.532 EAL: 
Ignore mapping IO port bar(1) 00:04:20.532 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:21.470 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:04:25.662 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:04:25.662 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:04:25.662 Starting DPDK initialization... 00:04:25.662 Starting SPDK post initialization... 00:04:25.662 SPDK NVMe probe 00:04:25.662 Attaching to 0000:d8:00.0 00:04:25.662 Attached to 0000:d8:00.0 00:04:25.662 Cleaning up... 00:04:25.662 00:04:25.662 real 0m4.979s 00:04:25.662 user 0m3.670s 00:04:25.662 sys 0m0.359s 00:04:25.662 19:05:11 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:25.662 19:05:11 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:25.662 ************************************ 00:04:25.662 END TEST env_dpdk_post_init 00:04:25.662 ************************************ 00:04:25.662 19:05:11 env -- env/env.sh@26 -- # uname 00:04:25.662 19:05:11 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:25.662 19:05:11 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:25.662 19:05:11 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:25.662 19:05:11 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:25.662 19:05:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:25.662 ************************************ 00:04:25.662 START TEST env_mem_callbacks 00:04:25.662 ************************************ 00:04:25.662 19:05:11 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:25.663 EAL: Detected CPU lcores: 112 00:04:25.663 EAL: Detected NUMA nodes: 2 00:04:25.663 EAL: Detected shared linkage of DPDK 00:04:25.663 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:25.663 EAL: Selected IOVA mode 'VA' 00:04:25.663 EAL: No free 2048 kB hugepages reported on node 1 00:04:25.663 EAL: VFIO support initialized 00:04:25.663 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:25.663 00:04:25.663 00:04:25.663 CUnit - A unit testing framework for C - Version 2.1-3 00:04:25.663 http://cunit.sourceforge.net/ 00:04:25.663 00:04:25.663 00:04:25.663 Suite: memory 00:04:25.663 Test: test ... 
00:04:25.663 register 0x200000200000 2097152
00:04:25.663 malloc 3145728
00:04:25.663 register 0x200000400000 4194304
00:04:25.663 buf 0x200000500000 len 3145728 PASSED
00:04:25.663 malloc 64
00:04:25.663 buf 0x2000004fff40 len 64 PASSED
00:04:25.663 malloc 4194304
00:04:25.663 register 0x200000800000 6291456
00:04:25.663 buf 0x200000a00000 len 4194304 PASSED
00:04:25.663 free 0x200000500000 3145728
00:04:25.663 free 0x2000004fff40 64
00:04:25.663 unregister 0x200000400000 4194304 PASSED
00:04:25.663 free 0x200000a00000 4194304
00:04:25.663 unregister 0x200000800000 6291456 PASSED
00:04:25.663 malloc 8388608
00:04:25.663 register 0x200000400000 10485760
00:04:25.663 buf 0x200000600000 len 8388608 PASSED
00:04:25.663 free 0x200000600000 8388608
00:04:25.663 unregister 0x200000400000 10485760 PASSED
00:04:25.663 passed
00:04:25.663
00:04:25.663 Run Summary: Type Total Ran Passed Failed Inactive
00:04:25.663 suites 1 1 n/a 0 0
00:04:25.663 tests 1 1 1 0 0
00:04:25.663 asserts 15 15 15 0 n/a
00:04:25.663
00:04:25.663 Elapsed time = 0.006 seconds
00:04:25.663
00:04:25.663 real 0m0.068s
00:04:25.663 user 0m0.020s
00:04:25.663 sys 0m0.048s
00:04:25.663 19:05:11 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:25.663 19:05:11 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:25.663 ************************************
00:04:25.663 END TEST env_mem_callbacks
00:04:25.663 ************************************
00:04:25.663
00:04:25.663 real 0m6.850s
00:04:25.663 user 0m4.658s
00:04:25.663 sys 0m1.255s
00:04:25.663 19:05:11 env -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:25.663 19:05:11 env -- common/autotest_common.sh@10 -- # set +x
00:04:25.663 ************************************
00:04:25.663 END TEST env
00:04:25.663 ************************************
00:04:25.663 19:05:11 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:25.663 19:05:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:25.663 19:05:11 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:25.663 19:05:11 -- common/autotest_common.sh@10 -- # set +x
00:04:25.663 ************************************
00:04:25.663 START TEST rpc
00:04:25.663 ************************************
00:04:25.663 19:05:11 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:25.663 * Looking for test storage...
00:04:25.663 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:25.663 19:05:11 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1334619
00:04:25.663 19:05:11 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:25.663 19:05:11 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:04:25.663 19:05:11 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1334619
00:04:25.663 19:05:11 rpc -- common/autotest_common.sh@831 -- # '[' -z 1334619 ']'
00:04:25.663 19:05:11 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:25.663 19:05:11 rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:25.663 19:05:11 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:25.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:25.663 19:05:11 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:25.663 19:05:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.663 [2024-07-24 19:05:11.817763] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:04:25.663 [2024-07-24 19:05:11.817812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1334619 ] 00:04:25.663 EAL: No free 2048 kB hugepages reported on node 1 00:04:25.663 [2024-07-24 19:05:11.884762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.922 [2024-07-24 19:05:11.955345] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:25.922 [2024-07-24 19:05:11.955387] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1334619' to capture a snapshot of events at runtime. 00:04:25.922 [2024-07-24 19:05:11.955397] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:25.922 [2024-07-24 19:05:11.955405] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:25.922 [2024-07-24 19:05:11.955412] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1334619 for offline analysis/debug. 00:04:25.922 [2024-07-24 19:05:11.955434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.489 19:05:12 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:26.489 19:05:12 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:26.489 19:05:12 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:26.489 19:05:12 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:26.489 19:05:12 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:26.489 19:05:12 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:26.489 19:05:12 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:26.489 19:05:12 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:26.489 19:05:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.489 ************************************ 00:04:26.489 START TEST rpc_integrity 00:04:26.489 ************************************ 00:04:26.489 19:05:12 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:26.489 19:05:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:26.489 19:05:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.489 19:05:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.489 19:05:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.489 19:05:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:26.489 19:05:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:26.489 19:05:12 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:26.489 19:05:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:26.489 19:05:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.489 19:05:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.489 19:05:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.489 19:05:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:26.489 19:05:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:26.489 19:05:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.489 19:05:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.748 19:05:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.748 19:05:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:26.748 { 00:04:26.748 "name": "Malloc0", 00:04:26.748 "aliases": [ 00:04:26.748 "dfaf4959-97c4-49c5-a477-11a0c6875858" 00:04:26.748 ], 00:04:26.748 "product_name": "Malloc disk", 00:04:26.748 "block_size": 512, 00:04:26.748 "num_blocks": 16384, 00:04:26.748 "uuid": "dfaf4959-97c4-49c5-a477-11a0c6875858", 00:04:26.748 "assigned_rate_limits": { 00:04:26.748 "rw_ios_per_sec": 0, 00:04:26.748 "rw_mbytes_per_sec": 0, 00:04:26.748 "r_mbytes_per_sec": 0, 00:04:26.748 "w_mbytes_per_sec": 0 00:04:26.748 }, 00:04:26.748 "claimed": false, 00:04:26.749 "zoned": false, 00:04:26.749 "supported_io_types": { 00:04:26.749 "read": true, 00:04:26.749 "write": true, 00:04:26.749 "unmap": true, 00:04:26.749 "flush": true, 00:04:26.749 "reset": true, 00:04:26.749 "nvme_admin": false, 00:04:26.749 "nvme_io": false, 00:04:26.749 "nvme_io_md": false, 00:04:26.749 "write_zeroes": true, 00:04:26.749 "zcopy": true, 00:04:26.749 "get_zone_info": false, 00:04:26.749 "zone_management": false, 00:04:26.749 "zone_append": false, 00:04:26.749 "compare": false, 00:04:26.749 "compare_and_write": false, 00:04:26.749 "abort": true, 00:04:26.749 "seek_hole": false, 00:04:26.749 "seek_data": false, 00:04:26.749 "copy": true, 00:04:26.749 "nvme_iov_md": false 00:04:26.749 }, 00:04:26.749 "memory_domains": [ 00:04:26.749 { 00:04:26.749 "dma_device_id": "system", 00:04:26.749 "dma_device_type": 1 00:04:26.749 }, 00:04:26.749 { 00:04:26.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.749 "dma_device_type": 2 00:04:26.749 } 00:04:26.749 ], 00:04:26.749 "driver_specific": {} 00:04:26.749 } 00:04:26.749 ]' 00:04:26.749 19:05:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:26.749 19:05:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:26.749 19:05:12 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:26.749 19:05:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.749 19:05:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.749 [2024-07-24 19:05:12.779192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:26.749 [2024-07-24 19:05:12.779229] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:26.749 [2024-07-24 19:05:12.779243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xeac440 00:04:26.749 [2024-07-24 19:05:12.779252] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:26.749 [2024-07-24 19:05:12.780324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
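At this point Malloc0 has been claimed by the new Passthru0 vbdev ("bdev claimed" in the notices above), so the bdev_get_bdevs dump that follows lists two bdevs, with Malloc0 carrying "claimed": true and "claim_type": "exclusive_write". The whole rpc_integrity lifecycle can be replayed against a running target with scripts/rpc.py, a sketch reusing the bdev names from this run:

  scripts/rpc.py bdev_malloc_create 8 512       # 8 MiB bdev, 512-byte blocks; prints the name, e.g. Malloc0
  scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  scripts/rpc.py bdev_get_bdevs | jq length     # 2: the base bdev plus the passthru on top
  scripts/rpc.py bdev_passthru_delete Passthru0
  scripts/rpc.py bdev_malloc_delete Malloc0
  scripts/rpc.py bdev_get_bdevs | jq length     # back to 0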
00:04:26.749 [2024-07-24 19:05:12.780347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:26.749 Passthru0 00:04:26.749 19:05:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.749 19:05:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:26.749 19:05:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.749 19:05:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.749 19:05:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.749 19:05:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:26.749 { 00:04:26.749 "name": "Malloc0", 00:04:26.749 "aliases": [ 00:04:26.749 "dfaf4959-97c4-49c5-a477-11a0c6875858" 00:04:26.749 ], 00:04:26.749 "product_name": "Malloc disk", 00:04:26.749 "block_size": 512, 00:04:26.749 "num_blocks": 16384, 00:04:26.749 "uuid": "dfaf4959-97c4-49c5-a477-11a0c6875858", 00:04:26.749 "assigned_rate_limits": { 00:04:26.749 "rw_ios_per_sec": 0, 00:04:26.749 "rw_mbytes_per_sec": 0, 00:04:26.749 "r_mbytes_per_sec": 0, 00:04:26.749 "w_mbytes_per_sec": 0 00:04:26.749 }, 00:04:26.749 "claimed": true, 00:04:26.749 "claim_type": "exclusive_write", 00:04:26.749 "zoned": false, 00:04:26.749 "supported_io_types": { 00:04:26.749 "read": true, 00:04:26.749 "write": true, 00:04:26.749 "unmap": true, 00:04:26.749 "flush": true, 00:04:26.749 "reset": true, 00:04:26.749 "nvme_admin": false, 00:04:26.749 "nvme_io": false, 00:04:26.749 "nvme_io_md": false, 00:04:26.749 "write_zeroes": true, 00:04:26.749 "zcopy": true, 00:04:26.749 "get_zone_info": false, 00:04:26.749 "zone_management": false, 00:04:26.749 "zone_append": false, 00:04:26.749 "compare": false, 00:04:26.749 "compare_and_write": false, 00:04:26.749 "abort": true, 00:04:26.749 "seek_hole": false, 00:04:26.749 "seek_data": false, 00:04:26.749 "copy": true, 00:04:26.749 "nvme_iov_md": false 00:04:26.749 }, 00:04:26.749 "memory_domains": [ 00:04:26.749 { 00:04:26.749 "dma_device_id": "system", 00:04:26.749 "dma_device_type": 1 00:04:26.749 }, 00:04:26.749 { 00:04:26.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.749 "dma_device_type": 2 00:04:26.749 } 00:04:26.749 ], 00:04:26.749 "driver_specific": {} 00:04:26.749 }, 00:04:26.749 { 00:04:26.749 "name": "Passthru0", 00:04:26.749 "aliases": [ 00:04:26.749 "b88b5d4d-30ae-5262-964a-baaced180de2" 00:04:26.749 ], 00:04:26.749 "product_name": "passthru", 00:04:26.749 "block_size": 512, 00:04:26.749 "num_blocks": 16384, 00:04:26.749 "uuid": "b88b5d4d-30ae-5262-964a-baaced180de2", 00:04:26.749 "assigned_rate_limits": { 00:04:26.749 "rw_ios_per_sec": 0, 00:04:26.749 "rw_mbytes_per_sec": 0, 00:04:26.749 "r_mbytes_per_sec": 0, 00:04:26.749 "w_mbytes_per_sec": 0 00:04:26.749 }, 00:04:26.749 "claimed": false, 00:04:26.749 "zoned": false, 00:04:26.749 "supported_io_types": { 00:04:26.749 "read": true, 00:04:26.749 "write": true, 00:04:26.749 "unmap": true, 00:04:26.749 "flush": true, 00:04:26.749 "reset": true, 00:04:26.749 "nvme_admin": false, 00:04:26.749 "nvme_io": false, 00:04:26.749 "nvme_io_md": false, 00:04:26.749 "write_zeroes": true, 00:04:26.749 "zcopy": true, 00:04:26.749 "get_zone_info": false, 00:04:26.749 "zone_management": false, 00:04:26.749 "zone_append": false, 00:04:26.749 "compare": false, 00:04:26.749 "compare_and_write": false, 00:04:26.749 "abort": true, 00:04:26.749 "seek_hole": false, 00:04:26.749 "seek_data": false, 00:04:26.749 "copy": true, 00:04:26.749 "nvme_iov_md": false 00:04:26.749 
}, 00:04:26.749 "memory_domains": [ 00:04:26.749 { 00:04:26.749 "dma_device_id": "system", 00:04:26.749 "dma_device_type": 1 00:04:26.749 }, 00:04:26.749 { 00:04:26.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.749 "dma_device_type": 2 00:04:26.749 } 00:04:26.749 ], 00:04:26.749 "driver_specific": { 00:04:26.749 "passthru": { 00:04:26.749 "name": "Passthru0", 00:04:26.749 "base_bdev_name": "Malloc0" 00:04:26.749 } 00:04:26.749 } 00:04:26.749 } 00:04:26.749 ]' 00:04:26.749 19:05:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:26.749 19:05:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:26.749 19:05:12 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:26.749 19:05:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.749 19:05:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.749 19:05:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.749 19:05:12 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:26.749 19:05:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.749 19:05:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.749 19:05:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.749 19:05:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:26.749 19:05:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.749 19:05:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.749 19:05:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.749 19:05:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:26.749 19:05:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:26.749 19:05:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:26.749 00:04:26.749 real 0m0.275s 00:04:26.749 user 0m0.171s 00:04:26.749 sys 0m0.050s 00:04:26.749 19:05:12 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:26.749 19:05:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.749 ************************************ 00:04:26.749 END TEST rpc_integrity 00:04:26.749 ************************************ 00:04:26.749 19:05:12 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:26.749 19:05:12 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:26.749 19:05:12 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:26.749 19:05:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.009 ************************************ 00:04:27.009 START TEST rpc_plugins 00:04:27.009 ************************************ 00:04:27.009 19:05:12 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:27.009 19:05:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:27.009 19:05:12 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.009 19:05:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.009 19:05:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.009 19:05:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:27.009 19:05:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:27.009 19:05:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.009 19:05:13 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:04:27.009 19:05:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.009 19:05:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:27.009 { 00:04:27.009 "name": "Malloc1", 00:04:27.009 "aliases": [ 00:04:27.009 "9bd920e9-7928-4c0b-b0ea-552ab353cba2" 00:04:27.009 ], 00:04:27.009 "product_name": "Malloc disk", 00:04:27.009 "block_size": 4096, 00:04:27.009 "num_blocks": 256, 00:04:27.009 "uuid": "9bd920e9-7928-4c0b-b0ea-552ab353cba2", 00:04:27.009 "assigned_rate_limits": { 00:04:27.009 "rw_ios_per_sec": 0, 00:04:27.009 "rw_mbytes_per_sec": 0, 00:04:27.009 "r_mbytes_per_sec": 0, 00:04:27.009 "w_mbytes_per_sec": 0 00:04:27.009 }, 00:04:27.009 "claimed": false, 00:04:27.009 "zoned": false, 00:04:27.009 "supported_io_types": { 00:04:27.009 "read": true, 00:04:27.009 "write": true, 00:04:27.009 "unmap": true, 00:04:27.009 "flush": true, 00:04:27.009 "reset": true, 00:04:27.009 "nvme_admin": false, 00:04:27.009 "nvme_io": false, 00:04:27.009 "nvme_io_md": false, 00:04:27.009 "write_zeroes": true, 00:04:27.009 "zcopy": true, 00:04:27.009 "get_zone_info": false, 00:04:27.009 "zone_management": false, 00:04:27.009 "zone_append": false, 00:04:27.009 "compare": false, 00:04:27.009 "compare_and_write": false, 00:04:27.009 "abort": true, 00:04:27.009 "seek_hole": false, 00:04:27.009 "seek_data": false, 00:04:27.009 "copy": true, 00:04:27.009 "nvme_iov_md": false 00:04:27.009 }, 00:04:27.009 "memory_domains": [ 00:04:27.009 { 00:04:27.009 "dma_device_id": "system", 00:04:27.009 "dma_device_type": 1 00:04:27.009 }, 00:04:27.009 { 00:04:27.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.009 "dma_device_type": 2 00:04:27.009 } 00:04:27.009 ], 00:04:27.009 "driver_specific": {} 00:04:27.009 } 00:04:27.009 ]' 00:04:27.009 19:05:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:27.009 19:05:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:27.009 19:05:13 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:27.009 19:05:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.009 19:05:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.009 19:05:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.009 19:05:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:27.009 19:05:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.009 19:05:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.009 19:05:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.009 19:05:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:27.009 19:05:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:27.009 19:05:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:27.009 00:04:27.009 real 0m0.142s 00:04:27.009 user 0m0.090s 00:04:27.009 sys 0m0.020s 00:04:27.009 19:05:13 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:27.009 19:05:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.009 ************************************ 00:04:27.009 END TEST rpc_plugins 00:04:27.009 ************************************ 00:04:27.009 19:05:13 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:27.009 19:05:13 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:27.009 19:05:13 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.009 19:05:13 
rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.009 ************************************ 00:04:27.009 START TEST rpc_trace_cmd_test 00:04:27.009 ************************************ 00:04:27.009 19:05:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:27.009 19:05:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:27.009 19:05:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:27.009 19:05:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.009 19:05:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:27.009 19:05:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.009 19:05:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:27.009 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1334619", 00:04:27.009 "tpoint_group_mask": "0x8", 00:04:27.009 "iscsi_conn": { 00:04:27.009 "mask": "0x2", 00:04:27.009 "tpoint_mask": "0x0" 00:04:27.009 }, 00:04:27.009 "scsi": { 00:04:27.009 "mask": "0x4", 00:04:27.009 "tpoint_mask": "0x0" 00:04:27.009 }, 00:04:27.009 "bdev": { 00:04:27.009 "mask": "0x8", 00:04:27.009 "tpoint_mask": "0xffffffffffffffff" 00:04:27.009 }, 00:04:27.009 "nvmf_rdma": { 00:04:27.009 "mask": "0x10", 00:04:27.009 "tpoint_mask": "0x0" 00:04:27.009 }, 00:04:27.009 "nvmf_tcp": { 00:04:27.009 "mask": "0x20", 00:04:27.009 "tpoint_mask": "0x0" 00:04:27.009 }, 00:04:27.009 "ftl": { 00:04:27.009 "mask": "0x40", 00:04:27.009 "tpoint_mask": "0x0" 00:04:27.009 }, 00:04:27.009 "blobfs": { 00:04:27.009 "mask": "0x80", 00:04:27.009 "tpoint_mask": "0x0" 00:04:27.009 }, 00:04:27.009 "dsa": { 00:04:27.009 "mask": "0x200", 00:04:27.009 "tpoint_mask": "0x0" 00:04:27.009 }, 00:04:27.009 "thread": { 00:04:27.009 "mask": "0x400", 00:04:27.009 "tpoint_mask": "0x0" 00:04:27.009 }, 00:04:27.009 "nvme_pcie": { 00:04:27.009 "mask": "0x800", 00:04:27.009 "tpoint_mask": "0x0" 00:04:27.009 }, 00:04:27.009 "iaa": { 00:04:27.009 "mask": "0x1000", 00:04:27.009 "tpoint_mask": "0x0" 00:04:27.009 }, 00:04:27.009 "nvme_tcp": { 00:04:27.009 "mask": "0x2000", 00:04:27.009 "tpoint_mask": "0x0" 00:04:27.009 }, 00:04:27.009 "bdev_nvme": { 00:04:27.009 "mask": "0x4000", 00:04:27.009 "tpoint_mask": "0x0" 00:04:27.009 }, 00:04:27.009 "sock": { 00:04:27.009 "mask": "0x8000", 00:04:27.009 "tpoint_mask": "0x0" 00:04:27.009 } 00:04:27.009 }' 00:04:27.009 19:05:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:27.270 19:05:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:27.270 19:05:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:27.270 19:05:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:27.270 19:05:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:27.271 19:05:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:27.271 19:05:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:27.271 19:05:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:27.271 19:05:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:27.271 19:05:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:27.271 00:04:27.271 real 0m0.204s 00:04:27.271 user 0m0.165s 00:04:27.271 sys 0m0.031s 00:04:27.271 19:05:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:27.271 19:05:13 
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:27.271 ************************************ 00:04:27.271 END TEST rpc_trace_cmd_test 00:04:27.271 ************************************ 00:04:27.271 19:05:13 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:27.271 19:05:13 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:27.271 19:05:13 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:27.271 19:05:13 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:27.271 19:05:13 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.271 19:05:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.271 ************************************ 00:04:27.271 START TEST rpc_daemon_integrity 00:04:27.271 ************************************ 00:04:27.271 19:05:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:27.271 19:05:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:27.271 19:05:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.271 19:05:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.533 19:05:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.533 19:05:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:27.533 19:05:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:27.533 19:05:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:27.533 19:05:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:27.533 19:05:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.533 19:05:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.533 19:05:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.533 19:05:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:27.533 19:05:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:27.533 19:05:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.533 19:05:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.533 19:05:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.533 19:05:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:27.533 { 00:04:27.533 "name": "Malloc2", 00:04:27.533 "aliases": [ 00:04:27.533 "7ec5d413-31ce-4f01-8be7-0fcb8258e245" 00:04:27.533 ], 00:04:27.533 "product_name": "Malloc disk", 00:04:27.533 "block_size": 512, 00:04:27.533 "num_blocks": 16384, 00:04:27.533 "uuid": "7ec5d413-31ce-4f01-8be7-0fcb8258e245", 00:04:27.533 "assigned_rate_limits": { 00:04:27.533 "rw_ios_per_sec": 0, 00:04:27.533 "rw_mbytes_per_sec": 0, 00:04:27.533 "r_mbytes_per_sec": 0, 00:04:27.533 "w_mbytes_per_sec": 0 00:04:27.533 }, 00:04:27.533 "claimed": false, 00:04:27.533 "zoned": false, 00:04:27.533 "supported_io_types": { 00:04:27.533 "read": true, 00:04:27.533 "write": true, 00:04:27.533 "unmap": true, 00:04:27.533 "flush": true, 00:04:27.533 "reset": true, 00:04:27.533 "nvme_admin": false, 00:04:27.533 "nvme_io": false, 00:04:27.533 "nvme_io_md": false, 00:04:27.533 "write_zeroes": true, 00:04:27.533 "zcopy": true, 00:04:27.533 "get_zone_info": false, 00:04:27.533 "zone_management": false, 00:04:27.533 "zone_append": false, 00:04:27.533 "compare": false, 00:04:27.533 "compare_and_write": false, 
00:04:27.533 "abort": true, 00:04:27.533 "seek_hole": false, 00:04:27.533 "seek_data": false, 00:04:27.533 "copy": true, 00:04:27.533 "nvme_iov_md": false 00:04:27.533 }, 00:04:27.533 "memory_domains": [ 00:04:27.533 { 00:04:27.533 "dma_device_id": "system", 00:04:27.533 "dma_device_type": 1 00:04:27.533 }, 00:04:27.533 { 00:04:27.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.533 "dma_device_type": 2 00:04:27.533 } 00:04:27.533 ], 00:04:27.534 "driver_specific": {} 00:04:27.534 } 00:04:27.534 ]' 00:04:27.534 19:05:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:27.534 19:05:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:27.534 19:05:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:27.534 19:05:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.534 19:05:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.534 [2024-07-24 19:05:13.641550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:27.534 [2024-07-24 19:05:13.641579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:27.534 [2024-07-24 19:05:13.641591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1051e70 00:04:27.534 [2024-07-24 19:05:13.641600] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:27.534 [2024-07-24 19:05:13.642510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:27.534 [2024-07-24 19:05:13.642533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:27.534 Passthru0 00:04:27.534 19:05:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.534 19:05:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:27.534 19:05:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.534 19:05:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.534 19:05:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.534 19:05:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:27.534 { 00:04:27.534 "name": "Malloc2", 00:04:27.534 "aliases": [ 00:04:27.534 "7ec5d413-31ce-4f01-8be7-0fcb8258e245" 00:04:27.534 ], 00:04:27.534 "product_name": "Malloc disk", 00:04:27.534 "block_size": 512, 00:04:27.534 "num_blocks": 16384, 00:04:27.534 "uuid": "7ec5d413-31ce-4f01-8be7-0fcb8258e245", 00:04:27.534 "assigned_rate_limits": { 00:04:27.534 "rw_ios_per_sec": 0, 00:04:27.534 "rw_mbytes_per_sec": 0, 00:04:27.534 "r_mbytes_per_sec": 0, 00:04:27.534 "w_mbytes_per_sec": 0 00:04:27.534 }, 00:04:27.534 "claimed": true, 00:04:27.534 "claim_type": "exclusive_write", 00:04:27.534 "zoned": false, 00:04:27.534 "supported_io_types": { 00:04:27.534 "read": true, 00:04:27.534 "write": true, 00:04:27.534 "unmap": true, 00:04:27.534 "flush": true, 00:04:27.534 "reset": true, 00:04:27.534 "nvme_admin": false, 00:04:27.534 "nvme_io": false, 00:04:27.534 "nvme_io_md": false, 00:04:27.534 "write_zeroes": true, 00:04:27.534 "zcopy": true, 00:04:27.534 "get_zone_info": false, 00:04:27.534 "zone_management": false, 00:04:27.534 "zone_append": false, 00:04:27.534 "compare": false, 00:04:27.534 "compare_and_write": false, 00:04:27.534 "abort": true, 00:04:27.534 "seek_hole": false, 00:04:27.534 "seek_data": false, 00:04:27.534 "copy": true, 
00:04:27.534 "nvme_iov_md": false 00:04:27.534 }, 00:04:27.534 "memory_domains": [ 00:04:27.534 { 00:04:27.534 "dma_device_id": "system", 00:04:27.534 "dma_device_type": 1 00:04:27.534 }, 00:04:27.534 { 00:04:27.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.534 "dma_device_type": 2 00:04:27.534 } 00:04:27.534 ], 00:04:27.534 "driver_specific": {} 00:04:27.534 }, 00:04:27.534 { 00:04:27.534 "name": "Passthru0", 00:04:27.534 "aliases": [ 00:04:27.534 "31ef5716-5c95-5252-856b-8785ab41f848" 00:04:27.534 ], 00:04:27.534 "product_name": "passthru", 00:04:27.534 "block_size": 512, 00:04:27.534 "num_blocks": 16384, 00:04:27.534 "uuid": "31ef5716-5c95-5252-856b-8785ab41f848", 00:04:27.534 "assigned_rate_limits": { 00:04:27.534 "rw_ios_per_sec": 0, 00:04:27.534 "rw_mbytes_per_sec": 0, 00:04:27.534 "r_mbytes_per_sec": 0, 00:04:27.534 "w_mbytes_per_sec": 0 00:04:27.534 }, 00:04:27.534 "claimed": false, 00:04:27.534 "zoned": false, 00:04:27.534 "supported_io_types": { 00:04:27.534 "read": true, 00:04:27.534 "write": true, 00:04:27.534 "unmap": true, 00:04:27.534 "flush": true, 00:04:27.534 "reset": true, 00:04:27.534 "nvme_admin": false, 00:04:27.534 "nvme_io": false, 00:04:27.534 "nvme_io_md": false, 00:04:27.534 "write_zeroes": true, 00:04:27.534 "zcopy": true, 00:04:27.534 "get_zone_info": false, 00:04:27.534 "zone_management": false, 00:04:27.534 "zone_append": false, 00:04:27.534 "compare": false, 00:04:27.534 "compare_and_write": false, 00:04:27.534 "abort": true, 00:04:27.534 "seek_hole": false, 00:04:27.534 "seek_data": false, 00:04:27.534 "copy": true, 00:04:27.534 "nvme_iov_md": false 00:04:27.534 }, 00:04:27.534 "memory_domains": [ 00:04:27.534 { 00:04:27.534 "dma_device_id": "system", 00:04:27.534 "dma_device_type": 1 00:04:27.534 }, 00:04:27.534 { 00:04:27.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.534 "dma_device_type": 2 00:04:27.534 } 00:04:27.534 ], 00:04:27.534 "driver_specific": { 00:04:27.534 "passthru": { 00:04:27.534 "name": "Passthru0", 00:04:27.534 "base_bdev_name": "Malloc2" 00:04:27.534 } 00:04:27.534 } 00:04:27.534 } 00:04:27.534 ]' 00:04:27.534 19:05:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:27.534 19:05:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:27.534 19:05:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:27.534 19:05:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.534 19:05:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.534 19:05:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.534 19:05:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:27.534 19:05:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.534 19:05:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.534 19:05:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.534 19:05:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:27.534 19:05:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.534 19:05:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.534 19:05:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.534 19:05:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:27.534 19:05:13 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:27.794 19:05:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:27.794 00:04:27.794 real 0m0.280s 00:04:27.794 user 0m0.172s 00:04:27.794 sys 0m0.048s 00:04:27.794 19:05:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:27.794 19:05:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.794 ************************************ 00:04:27.794 END TEST rpc_daemon_integrity 00:04:27.794 ************************************ 00:04:27.794 19:05:13 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:27.794 19:05:13 rpc -- rpc/rpc.sh@84 -- # killprocess 1334619 00:04:27.794 19:05:13 rpc -- common/autotest_common.sh@950 -- # '[' -z 1334619 ']' 00:04:27.794 19:05:13 rpc -- common/autotest_common.sh@954 -- # kill -0 1334619 00:04:27.794 19:05:13 rpc -- common/autotest_common.sh@955 -- # uname 00:04:27.794 19:05:13 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:27.794 19:05:13 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1334619 00:04:27.794 19:05:13 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:27.794 19:05:13 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:27.794 19:05:13 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1334619' 00:04:27.794 killing process with pid 1334619 00:04:27.794 19:05:13 rpc -- common/autotest_common.sh@969 -- # kill 1334619 00:04:27.794 19:05:13 rpc -- common/autotest_common.sh@974 -- # wait 1334619 00:04:28.054 00:04:28.054 real 0m2.508s 00:04:28.054 user 0m3.138s 00:04:28.054 sys 0m0.812s 00:04:28.054 19:05:14 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:28.054 19:05:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.054 ************************************ 00:04:28.054 END TEST rpc 00:04:28.054 ************************************ 00:04:28.054 19:05:14 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:28.054 19:05:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:28.054 19:05:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:28.054 19:05:14 -- common/autotest_common.sh@10 -- # set +x 00:04:28.054 ************************************ 00:04:28.054 START TEST skip_rpc 00:04:28.054 ************************************ 00:04:28.054 19:05:14 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:28.314 * Looking for test storage... 
00:04:28.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:28.314 19:05:14 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:28.314 19:05:14 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:28.314 19:05:14 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:28.314 19:05:14 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:28.314 19:05:14 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:28.314 19:05:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.314 ************************************ 00:04:28.314 START TEST skip_rpc 00:04:28.314 ************************************ 00:04:28.314 19:05:14 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:28.314 19:05:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1335313 00:04:28.314 19:05:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:28.314 19:05:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:28.314 19:05:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:28.314 [2024-07-24 19:05:14.449487] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:04:28.314 [2024-07-24 19:05:14.449529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1335313 ] 00:04:28.314 EAL: No free 2048 kB hugepages reported on node 1 00:04:28.314 [2024-07-24 19:05:14.515166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.573 [2024-07-24 19:05:14.584632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.846 19:05:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:33.847 19:05:19 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:33.847 19:05:19 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:33.847 19:05:19 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:33.847 19:05:19 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:33.847 19:05:19 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:33.847 19:05:19 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:33.847 19:05:19 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:33.847 19:05:19 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.847 19:05:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.847 19:05:19 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:33.847 19:05:19 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:33.847 19:05:19 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:33.847 19:05:19 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:33.847 19:05:19 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:33.847 19:05:19 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:33.847 19:05:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1335313 00:04:33.847 19:05:19 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 1335313 ']' 00:04:33.847 19:05:19 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 1335313 00:04:33.847 19:05:19 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:33.847 19:05:19 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:33.847 19:05:19 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1335313 00:04:33.847 19:05:19 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:33.847 19:05:19 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:33.847 19:05:19 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1335313' 00:04:33.847 killing process with pid 1335313 00:04:33.847 19:05:19 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 1335313 00:04:33.847 19:05:19 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 1335313 00:04:33.847 00:04:33.847 real 0m5.372s 00:04:33.847 user 0m5.148s 00:04:33.847 sys 0m0.271s 00:04:33.847 19:05:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:33.847 19:05:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.847 ************************************ 00:04:33.847 END TEST skip_rpc 00:04:33.847 ************************************ 00:04:33.847 19:05:19 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:33.847 19:05:19 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:33.847 19:05:19 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.847 19:05:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.847 ************************************ 00:04:33.847 START TEST skip_rpc_with_json 00:04:33.847 ************************************ 00:04:33.847 19:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:33.847 19:05:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:33.847 19:05:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1336187 00:04:33.847 19:05:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:33.847 19:05:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:33.847 19:05:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1336187 00:04:33.847 19:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 1336187 ']' 00:04:33.847 19:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.847 19:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:33.847 19:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
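The skip_rpc pass above is a negative test: the target was started with --no-rpc-server, so /var/tmp/spdk.sock is never created, and the NOT wrapper asserts that rpc_cmd spdk_get_version fails against it. The same check by hand, sketched under the assumption of an SPDK checkout with built binaries:

  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  sleep 5
  scripts/rpc.py spdk_get_version || echo 'RPC refused, as expected'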
00:04:33.847 19:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:33.847 19:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:33.847 [2024-07-24 19:05:19.889526] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:04:33.847 [2024-07-24 19:05:19.889575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1336187 ] 00:04:33.847 EAL: No free 2048 kB hugepages reported on node 1 00:04:33.847 [2024-07-24 19:05:19.958937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.847 [2024-07-24 19:05:20.038328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.786 19:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:34.786 19:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:34.786 19:05:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:34.786 19:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.786 19:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.786 [2024-07-24 19:05:20.683508] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:34.786 request: 00:04:34.786 { 00:04:34.786 "trtype": "tcp", 00:04:34.786 "method": "nvmf_get_transports", 00:04:34.786 "req_id": 1 00:04:34.786 } 00:04:34.786 Got JSON-RPC error response 00:04:34.786 response: 00:04:34.786 { 00:04:34.786 "code": -19, 00:04:34.786 "message": "No such device" 00:04:34.786 } 00:04:34.786 19:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:34.786 19:05:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:34.786 19:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.786 19:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.786 [2024-07-24 19:05:20.695610] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:34.786 19:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.786 19:05:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:34.786 19:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.786 19:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.786 19:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.786 19:05:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:34.786 { 00:04:34.786 "subsystems": [ 00:04:34.786 { 00:04:34.786 "subsystem": "vfio_user_target", 00:04:34.786 "config": null 00:04:34.786 }, 00:04:34.786 { 00:04:34.786 "subsystem": "keyring", 00:04:34.786 "config": [] 00:04:34.786 }, 00:04:34.786 { 00:04:34.786 "subsystem": "iobuf", 00:04:34.786 "config": [ 00:04:34.786 { 00:04:34.786 "method": "iobuf_set_options", 00:04:34.786 "params": { 00:04:34.786 "small_pool_count": 8192, 00:04:34.786 "large_pool_count": 1024, 00:04:34.786 "small_bufsize": 8192, 00:04:34.786 "large_bufsize": 
135168 00:04:34.786 } 00:04:34.786 } 00:04:34.786 ] 00:04:34.786 }, 00:04:34.786 { 00:04:34.786 "subsystem": "sock", 00:04:34.786 "config": [ 00:04:34.786 { 00:04:34.786 "method": "sock_set_default_impl", 00:04:34.786 "params": { 00:04:34.786 "impl_name": "posix" 00:04:34.786 } 00:04:34.786 }, 00:04:34.786 { 00:04:34.786 "method": "sock_impl_set_options", 00:04:34.786 "params": { 00:04:34.786 "impl_name": "ssl", 00:04:34.786 "recv_buf_size": 4096, 00:04:34.786 "send_buf_size": 4096, 00:04:34.786 "enable_recv_pipe": true, 00:04:34.786 "enable_quickack": false, 00:04:34.786 "enable_placement_id": 0, 00:04:34.786 "enable_zerocopy_send_server": true, 00:04:34.786 "enable_zerocopy_send_client": false, 00:04:34.786 "zerocopy_threshold": 0, 00:04:34.786 "tls_version": 0, 00:04:34.786 "enable_ktls": false 00:04:34.786 } 00:04:34.786 }, 00:04:34.786 { 00:04:34.786 "method": "sock_impl_set_options", 00:04:34.786 "params": { 00:04:34.786 "impl_name": "posix", 00:04:34.786 "recv_buf_size": 2097152, 00:04:34.786 "send_buf_size": 2097152, 00:04:34.786 "enable_recv_pipe": true, 00:04:34.786 "enable_quickack": false, 00:04:34.786 "enable_placement_id": 0, 00:04:34.786 "enable_zerocopy_send_server": true, 00:04:34.786 "enable_zerocopy_send_client": false, 00:04:34.786 "zerocopy_threshold": 0, 00:04:34.786 "tls_version": 0, 00:04:34.786 "enable_ktls": false 00:04:34.786 } 00:04:34.786 } 00:04:34.786 ] 00:04:34.786 }, 00:04:34.786 { 00:04:34.786 "subsystem": "vmd", 00:04:34.786 "config": [] 00:04:34.786 }, 00:04:34.786 { 00:04:34.786 "subsystem": "accel", 00:04:34.786 "config": [ 00:04:34.786 { 00:04:34.786 "method": "accel_set_options", 00:04:34.786 "params": { 00:04:34.786 "small_cache_size": 128, 00:04:34.786 "large_cache_size": 16, 00:04:34.786 "task_count": 2048, 00:04:34.786 "sequence_count": 2048, 00:04:34.786 "buf_count": 2048 00:04:34.786 } 00:04:34.786 } 00:04:34.786 ] 00:04:34.786 }, 00:04:34.786 { 00:04:34.786 "subsystem": "bdev", 00:04:34.786 "config": [ 00:04:34.786 { 00:04:34.786 "method": "bdev_set_options", 00:04:34.786 "params": { 00:04:34.786 "bdev_io_pool_size": 65535, 00:04:34.786 "bdev_io_cache_size": 256, 00:04:34.786 "bdev_auto_examine": true, 00:04:34.786 "iobuf_small_cache_size": 128, 00:04:34.786 "iobuf_large_cache_size": 16 00:04:34.786 } 00:04:34.786 }, 00:04:34.786 { 00:04:34.786 "method": "bdev_raid_set_options", 00:04:34.786 "params": { 00:04:34.786 "process_window_size_kb": 1024, 00:04:34.786 "process_max_bandwidth_mb_sec": 0 00:04:34.786 } 00:04:34.786 }, 00:04:34.786 { 00:04:34.786 "method": "bdev_iscsi_set_options", 00:04:34.786 "params": { 00:04:34.786 "timeout_sec": 30 00:04:34.786 } 00:04:34.786 }, 00:04:34.786 { 00:04:34.786 "method": "bdev_nvme_set_options", 00:04:34.786 "params": { 00:04:34.786 "action_on_timeout": "none", 00:04:34.786 "timeout_us": 0, 00:04:34.786 "timeout_admin_us": 0, 00:04:34.786 "keep_alive_timeout_ms": 10000, 00:04:34.786 "arbitration_burst": 0, 00:04:34.786 "low_priority_weight": 0, 00:04:34.786 "medium_priority_weight": 0, 00:04:34.786 "high_priority_weight": 0, 00:04:34.786 "nvme_adminq_poll_period_us": 10000, 00:04:34.786 "nvme_ioq_poll_period_us": 0, 00:04:34.786 "io_queue_requests": 0, 00:04:34.786 "delay_cmd_submit": true, 00:04:34.786 "transport_retry_count": 4, 00:04:34.786 "bdev_retry_count": 3, 00:04:34.786 "transport_ack_timeout": 0, 00:04:34.786 "ctrlr_loss_timeout_sec": 0, 00:04:34.786 "reconnect_delay_sec": 0, 00:04:34.786 "fast_io_fail_timeout_sec": 0, 00:04:34.786 "disable_auto_failback": false, 00:04:34.786 "generate_uuids": 
false, 00:04:34.786 "transport_tos": 0, 00:04:34.786 "nvme_error_stat": false, 00:04:34.786 "rdma_srq_size": 0, 00:04:34.786 "io_path_stat": false, 00:04:34.786 "allow_accel_sequence": false, 00:04:34.786 "rdma_max_cq_size": 0, 00:04:34.786 "rdma_cm_event_timeout_ms": 0, 00:04:34.786 "dhchap_digests": [ 00:04:34.786 "sha256", 00:04:34.786 "sha384", 00:04:34.786 "sha512" 00:04:34.786 ], 00:04:34.786 "dhchap_dhgroups": [ 00:04:34.786 "null", 00:04:34.786 "ffdhe2048", 00:04:34.786 "ffdhe3072", 00:04:34.786 "ffdhe4096", 00:04:34.786 "ffdhe6144", 00:04:34.786 "ffdhe8192" 00:04:34.786 ] 00:04:34.786 } 00:04:34.786 }, 00:04:34.786 { 00:04:34.786 "method": "bdev_nvme_set_hotplug", 00:04:34.786 "params": { 00:04:34.786 "period_us": 100000, 00:04:34.786 "enable": false 00:04:34.786 } 00:04:34.786 }, 00:04:34.786 { 00:04:34.786 "method": "bdev_wait_for_examine" 00:04:34.786 } 00:04:34.786 ] 00:04:34.786 }, 00:04:34.786 { 00:04:34.786 "subsystem": "scsi", 00:04:34.786 "config": null 00:04:34.786 }, 00:04:34.786 { 00:04:34.786 "subsystem": "scheduler", 00:04:34.786 "config": [ 00:04:34.786 { 00:04:34.786 "method": "framework_set_scheduler", 00:04:34.786 "params": { 00:04:34.786 "name": "static" 00:04:34.786 } 00:04:34.786 } 00:04:34.786 ] 00:04:34.786 }, 00:04:34.786 { 00:04:34.786 "subsystem": "vhost_scsi", 00:04:34.786 "config": [] 00:04:34.786 }, 00:04:34.786 { 00:04:34.786 "subsystem": "vhost_blk", 00:04:34.786 "config": [] 00:04:34.786 }, 00:04:34.787 { 00:04:34.787 "subsystem": "ublk", 00:04:34.787 "config": [] 00:04:34.787 }, 00:04:34.787 { 00:04:34.787 "subsystem": "nbd", 00:04:34.787 "config": [] 00:04:34.787 }, 00:04:34.787 { 00:04:34.787 "subsystem": "nvmf", 00:04:34.787 "config": [ 00:04:34.787 { 00:04:34.787 "method": "nvmf_set_config", 00:04:34.787 "params": { 00:04:34.787 "discovery_filter": "match_any", 00:04:34.787 "admin_cmd_passthru": { 00:04:34.787 "identify_ctrlr": false 00:04:34.787 } 00:04:34.787 } 00:04:34.787 }, 00:04:34.787 { 00:04:34.787 "method": "nvmf_set_max_subsystems", 00:04:34.787 "params": { 00:04:34.787 "max_subsystems": 1024 00:04:34.787 } 00:04:34.787 }, 00:04:34.787 { 00:04:34.787 "method": "nvmf_set_crdt", 00:04:34.787 "params": { 00:04:34.787 "crdt1": 0, 00:04:34.787 "crdt2": 0, 00:04:34.787 "crdt3": 0 00:04:34.787 } 00:04:34.787 }, 00:04:34.787 { 00:04:34.787 "method": "nvmf_create_transport", 00:04:34.787 "params": { 00:04:34.787 "trtype": "TCP", 00:04:34.787 "max_queue_depth": 128, 00:04:34.787 "max_io_qpairs_per_ctrlr": 127, 00:04:34.787 "in_capsule_data_size": 4096, 00:04:34.787 "max_io_size": 131072, 00:04:34.787 "io_unit_size": 131072, 00:04:34.787 "max_aq_depth": 128, 00:04:34.787 "num_shared_buffers": 511, 00:04:34.787 "buf_cache_size": 4294967295, 00:04:34.787 "dif_insert_or_strip": false, 00:04:34.787 "zcopy": false, 00:04:34.787 "c2h_success": true, 00:04:34.787 "sock_priority": 0, 00:04:34.787 "abort_timeout_sec": 1, 00:04:34.787 "ack_timeout": 0, 00:04:34.787 "data_wr_pool_size": 0 00:04:34.787 } 00:04:34.787 } 00:04:34.787 ] 00:04:34.787 }, 00:04:34.787 { 00:04:34.787 "subsystem": "iscsi", 00:04:34.787 "config": [ 00:04:34.787 { 00:04:34.787 "method": "iscsi_set_options", 00:04:34.787 "params": { 00:04:34.787 "node_base": "iqn.2016-06.io.spdk", 00:04:34.787 "max_sessions": 128, 00:04:34.787 "max_connections_per_session": 2, 00:04:34.787 "max_queue_depth": 64, 00:04:34.787 "default_time2wait": 2, 00:04:34.787 "default_time2retain": 20, 00:04:34.787 "first_burst_length": 8192, 00:04:34.787 "immediate_data": true, 00:04:34.787 "allow_duplicated_isid": 
false, 00:04:34.787 "error_recovery_level": 0, 00:04:34.787 "nop_timeout": 60, 00:04:34.787 "nop_in_interval": 30, 00:04:34.787 "disable_chap": false, 00:04:34.787 "require_chap": false, 00:04:34.787 "mutual_chap": false, 00:04:34.787 "chap_group": 0, 00:04:34.787 "max_large_datain_per_connection": 64, 00:04:34.787 "max_r2t_per_connection": 4, 00:04:34.787 "pdu_pool_size": 36864, 00:04:34.787 "immediate_data_pool_size": 16384, 00:04:34.787 "data_out_pool_size": 2048 00:04:34.787 } 00:04:34.787 } 00:04:34.787 ] 00:04:34.787 } 00:04:34.787 ] 00:04:34.787 } 00:04:34.787 19:05:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:34.787 19:05:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1336187 00:04:34.787 19:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1336187 ']' 00:04:34.787 19:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1336187 00:04:34.787 19:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:34.787 19:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:34.787 19:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1336187 00:04:34.787 19:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:34.787 19:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:34.787 19:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1336187' 00:04:34.787 killing process with pid 1336187 00:04:34.787 19:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1336187 00:04:34.787 19:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1336187 00:04:35.046 19:05:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1336420 00:04:35.046 19:05:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:35.046 19:05:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:40.320 19:05:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1336420 00:04:40.320 19:05:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1336420 ']' 00:04:40.320 19:05:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1336420 00:04:40.320 19:05:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:40.320 19:05:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:40.320 19:05:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1336420 00:04:40.320 19:05:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:40.320 19:05:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:40.320 19:05:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1336420' 00:04:40.320 killing process with pid 1336420 00:04:40.320 19:05:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1336420 00:04:40.320 19:05:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 
1336420 00:04:40.616 19:05:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:40.616 19:05:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:40.616 00:04:40.616 real 0m6.779s 00:04:40.616 user 0m6.581s 00:04:40.616 sys 0m0.655s 00:04:40.616 19:05:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.616 19:05:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:40.616 ************************************ 00:04:40.616 END TEST skip_rpc_with_json 00:04:40.616 ************************************ 00:04:40.616 19:05:26 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:40.616 19:05:26 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.616 19:05:26 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.616 19:05:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.616 ************************************ 00:04:40.616 START TEST skip_rpc_with_delay 00:04:40.616 ************************************ 00:04:40.616 19:05:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:40.616 19:05:26 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:40.616 19:05:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:40.616 19:05:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:40.616 19:05:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.616 19:05:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:40.616 19:05:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.616 19:05:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:40.616 19:05:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.616 19:05:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:40.616 19:05:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.616 19:05:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:40.616 19:05:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:40.616 [2024-07-24 19:05:26.736352] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
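skip_rpc_with_json, which finishes above, proves the save/load round trip: the first target creates a TCP transport over RPC and serializes its state with save_config; a second target is then booted with --json pointing at that dump, and the harness greps its log for 'TCP Transport Init' to confirm the transport was recreated purely from the saved configuration. The core of that flow, as a sketch with an illustrative config path:

  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py save_config > config.json      # serializes every subsystem's live state as JSON
  build/bin/spdk_tgt --json config.json         # replays the saved config at startup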
00:04:40.616 [2024-07-24 19:05:26.736417] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:40.616 19:05:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:40.616 19:05:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:40.616 19:05:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:40.616 19:05:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:40.616 00:04:40.616 real 0m0.056s 00:04:40.616 user 0m0.033s 00:04:40.616 sys 0m0.023s 00:04:40.616 19:05:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.616 19:05:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:40.616 ************************************ 00:04:40.616 END TEST skip_rpc_with_delay 00:04:40.616 ************************************ 00:04:40.616 19:05:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:40.616 19:05:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:40.616 19:05:26 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:40.616 19:05:26 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.616 19:05:26 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.616 19:05:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.616 ************************************ 00:04:40.616 START TEST exit_on_failed_rpc_init 00:04:40.616 ************************************ 00:04:40.616 19:05:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:40.616 19:05:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1337526 00:04:40.616 19:05:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1337526 00:04:40.616 19:05:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:40.616 19:05:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 1337526 ']' 00:04:40.616 19:05:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.616 19:05:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:40.616 19:05:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.616 19:05:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:40.616 19:05:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:40.876 [2024-07-24 19:05:26.890649] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:04:40.876 [2024-07-24 19:05:26.890692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1337526 ] 00:04:40.876 EAL: No free 2048 kB hugepages reported on node 1 00:04:40.876 [2024-07-24 19:05:26.958801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.876 [2024-07-24 19:05:27.033489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.444 19:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:41.444 19:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:41.445 19:05:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.445 19:05:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:41.703 19:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:41.703 19:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:41.703 19:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.703 19:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:41.703 19:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.703 19:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:41.703 19:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.703 19:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:41.703 19:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.703 19:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:41.703 19:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:41.703 [2024-07-24 19:05:27.741046] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:04:41.704 [2024-07-24 19:05:27.741097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1337651 ] 00:04:41.704 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.704 [2024-07-24 19:05:27.809572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.704 [2024-07-24 19:05:27.877300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.704 [2024-07-24 19:05:27.877368] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:41.704 [2024-07-24 19:05:27.877380] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:41.704 [2024-07-24 19:05:27.877388] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:41.962 19:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:41.962 19:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:41.962 19:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:41.962 19:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:41.962 19:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:41.962 19:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:41.962 19:05:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:41.963 19:05:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1337526 00:04:41.963 19:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 1337526 ']' 00:04:41.963 19:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 1337526 00:04:41.963 19:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:41.963 19:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:41.963 19:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1337526 00:04:41.963 19:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:41.963 19:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:41.963 19:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1337526' 00:04:41.963 killing process with pid 1337526 00:04:41.963 19:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 1337526 00:04:41.963 19:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 1337526 00:04:42.222 00:04:42.222 real 0m1.465s 00:04:42.222 user 0m1.664s 00:04:42.222 sys 0m0.439s 00:04:42.222 19:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.222 19:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:42.222 ************************************ 00:04:42.222 END TEST exit_on_failed_rpc_init 00:04:42.222 ************************************ 00:04:42.222 19:05:28 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:42.222 00:04:42.222 real 0m14.104s 00:04:42.222 user 0m13.594s 00:04:42.222 sys 0m1.685s 00:04:42.222 19:05:28 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.222 19:05:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.222 ************************************ 00:04:42.222 END TEST skip_rpc 00:04:42.222 ************************************ 00:04:42.222 19:05:28 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:42.222 19:05:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.222 19:05:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.222 19:05:28 -- common/autotest_common.sh@10 -- # set +x 00:04:42.222 ************************************ 00:04:42.222 START TEST rpc_client 00:04:42.222 ************************************ 00:04:42.222 19:05:28 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:42.481 * Looking for test storage... 00:04:42.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:42.481 19:05:28 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:42.481 OK 00:04:42.481 19:05:28 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:42.481 00:04:42.481 real 0m0.134s 00:04:42.481 user 0m0.054s 00:04:42.481 sys 0m0.088s 00:04:42.481 19:05:28 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.481 19:05:28 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:42.481 ************************************ 00:04:42.481 END TEST rpc_client 00:04:42.481 ************************************ 00:04:42.481 19:05:28 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:42.481 19:05:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.481 19:05:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.481 19:05:28 -- common/autotest_common.sh@10 -- # set +x 00:04:42.481 ************************************ 00:04:42.481 START TEST json_config 00:04:42.481 ************************************ 00:04:42.481 19:05:28 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:42.481 19:05:28 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:42.481 19:05:28 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:42.481 19:05:28 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:42.481 19:05:28 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:42.481 19:05:28 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:42.481 19:05:28 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:42.481 19:05:28 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:42.481 19:05:28 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:42.481 19:05:28 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:42.481 19:05:28 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:42.481 19:05:28 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
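The json_config suite starts by sourcing test/nvmf/common.sh, whose defaults appear in the records above and below; the host NQN it derives via nvme gen-hostnqn is a UUID-based NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, as the next records show. A hedged sketch of producing the same value, where the uuidgen fallback is an assumption of this note rather than anything common.sh does:

    # Derive a host NQN: prefer nvme-cli, else compose the same shape by hand.
    if command -v nvme >/dev/null 2>&1; then
        hostnqn=$(nvme gen-hostnqn)
    else
        hostnqn="nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)"  # fallback (assumption)
    fi
    echo "$hostnqn"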
00:04:42.481 19:05:28 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:42.481 19:05:28 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:04:42.481 19:05:28 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:04:42.481 19:05:28 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:42.481 19:05:28 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:42.481 19:05:28 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:42.481 19:05:28 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:42.481 19:05:28 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:42.481 19:05:28 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:42.481 19:05:28 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:42.481 19:05:28 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:42.481 19:05:28 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.481 19:05:28 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.481 19:05:28 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.481 19:05:28 json_config -- paths/export.sh@5 -- # export PATH 00:04:42.481 19:05:28 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.481 19:05:28 json_config -- nvmf/common.sh@47 -- # : 0 00:04:42.481 19:05:28 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:42.481 19:05:28 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:42.481 19:05:28 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:42.481 19:05:28 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:42.481 19:05:28 json_config -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:04:42.481 19:05:28 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:42.481 19:05:28 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:42.481 19:05:28 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:42.481 19:05:28 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:42.481 19:05:28 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:42.740 19:05:28 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:42.740 19:05:28 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:42.740 19:05:28 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:42.740 19:05:28 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:42.740 19:05:28 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:42.740 19:05:28 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:42.740 19:05:28 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:42.740 19:05:28 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:42.740 19:05:28 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:42.740 19:05:28 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:42.740 19:05:28 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:42.740 19:05:28 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:42.740 19:05:28 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:42.740 19:05:28 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:04:42.740 INFO: JSON configuration test init 00:04:42.740 19:05:28 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:04:42.740 19:05:28 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:04:42.740 19:05:28 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:42.740 19:05:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.740 19:05:28 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:04:42.740 19:05:28 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:42.740 19:05:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.740 19:05:28 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:04:42.740 19:05:28 json_config -- json_config/common.sh@9 -- # local app=target 00:04:42.740 19:05:28 json_config -- json_config/common.sh@10 -- # shift 00:04:42.740 19:05:28 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:42.740 19:05:28 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:42.740 19:05:28 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:42.740 19:05:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 
]] 00:04:42.740 19:05:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.740 19:05:28 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1337909 00:04:42.740 19:05:28 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:42.740 Waiting for target to run... 00:04:42.740 19:05:28 json_config -- json_config/common.sh@25 -- # waitforlisten 1337909 /var/tmp/spdk_tgt.sock 00:04:42.740 19:05:28 json_config -- common/autotest_common.sh@831 -- # '[' -z 1337909 ']' 00:04:42.740 19:05:28 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:42.740 19:05:28 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:42.740 19:05:28 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:42.741 19:05:28 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:42.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:42.741 19:05:28 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:42.741 19:05:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.741 [2024-07-24 19:05:28.789373] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:04:42.741 [2024-07-24 19:05:28.789421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1337909 ] 00:04:42.741 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.999 [2024-07-24 19:05:29.217498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.258 [2024-07-24 19:05:29.299749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.517 19:05:29 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:43.517 19:05:29 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:43.517 19:05:29 json_config -- json_config/common.sh@26 -- # echo '' 00:04:43.517 00:04:43.517 19:05:29 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:04:43.517 19:05:29 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:04:43.517 19:05:29 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:43.517 19:05:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.517 19:05:29 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:04:43.517 19:05:29 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:04:43.517 19:05:29 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:43.517 19:05:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.517 19:05:29 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:43.517 19:05:29 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:04:43.517 19:05:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:46.871 19:05:32 json_config -- json_config/json_config.sh@280 -- # 
tgt_check_notification_types 00:04:46.871 19:05:32 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:46.871 19:05:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:46.871 19:05:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.871 19:05:32 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:46.871 19:05:32 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:46.871 19:05:32 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:46.871 19:05:32 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:46.871 19:05:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:46.871 19:05:32 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:46.871 19:05:32 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:46.871 19:05:32 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:46.871 19:05:32 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:04:46.871 19:05:32 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:04:46.871 19:05:32 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:04:46.871 19:05:32 json_config -- json_config/json_config.sh@51 -- # sort 00:04:46.871 19:05:32 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:04:46.871 19:05:32 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:04:46.871 19:05:32 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:04:46.871 19:05:32 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:04:46.871 19:05:32 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:46.871 19:05:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.871 19:05:32 json_config -- json_config/json_config.sh@59 -- # return 0 00:04:46.871 19:05:32 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:46.871 19:05:32 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:46.871 19:05:32 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:04:46.871 19:05:32 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:04:46.871 19:05:32 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:04:46.871 19:05:32 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:04:46.871 19:05:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:46.871 19:05:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.871 19:05:32 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:46.871 19:05:32 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:04:46.871 19:05:32 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:04:46.871 19:05:32 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:46.871 19:05:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:46.871 MallocForNvmf0 00:04:47.130 
19:05:33 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:47.130 19:05:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:47.130 MallocForNvmf1 00:04:47.130 19:05:33 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:47.130 19:05:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:47.387 [2024-07-24 19:05:33.431962] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:47.387 19:05:33 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:47.387 19:05:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:47.387 19:05:33 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:47.387 19:05:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:47.645 19:05:33 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:47.645 19:05:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:47.904 19:05:33 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:47.904 19:05:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:47.904 [2024-07-24 19:05:34.138191] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:48.162 19:05:34 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:04:48.162 19:05:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:48.162 19:05:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.162 19:05:34 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:04:48.162 19:05:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:48.162 19:05:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.162 19:05:34 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:04:48.162 19:05:34 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:48.162 19:05:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:48.162 MallocBdevForConfigChangeCheck 00:04:48.421 19:05:34 json_config -- 
json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:04:48.421 19:05:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:48.421 19:05:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.421 19:05:34 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:04:48.421 19:05:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:48.679 19:05:34 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:04:48.679 INFO: shutting down applications... 00:04:48.679 19:05:34 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:04:48.679 19:05:34 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:04:48.679 19:05:34 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:04:48.679 19:05:34 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:50.582 Calling clear_iscsi_subsystem 00:04:50.582 Calling clear_nvmf_subsystem 00:04:50.582 Calling clear_nbd_subsystem 00:04:50.582 Calling clear_ublk_subsystem 00:04:50.582 Calling clear_vhost_blk_subsystem 00:04:50.582 Calling clear_vhost_scsi_subsystem 00:04:50.582 Calling clear_bdev_subsystem 00:04:50.841 19:05:36 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:50.841 19:05:36 json_config -- json_config/json_config.sh@347 -- # count=100 00:04:50.841 19:05:36 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:04:50.841 19:05:36 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:50.841 19:05:36 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:50.841 19:05:36 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:51.100 19:05:37 json_config -- json_config/json_config.sh@349 -- # break 00:04:51.100 19:05:37 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:04:51.100 19:05:37 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:04:51.100 19:05:37 json_config -- json_config/common.sh@31 -- # local app=target 00:04:51.100 19:05:37 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:51.100 19:05:37 json_config -- json_config/common.sh@35 -- # [[ -n 1337909 ]] 00:04:51.100 19:05:37 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1337909 00:04:51.100 19:05:37 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:51.100 19:05:37 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.100 19:05:37 json_config -- json_config/common.sh@41 -- # kill -0 1337909 00:04:51.100 19:05:37 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:51.668 19:05:37 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:51.668 19:05:37 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.668 19:05:37 json_config -- json_config/common.sh@41 -- # kill -0 1337909 00:04:51.668 19:05:37 json_config -- 
json_config/common.sh@42 -- # app_pid["$app"]= 00:04:51.668 19:05:37 json_config -- json_config/common.sh@43 -- # break 00:04:51.668 19:05:37 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:51.668 19:05:37 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:51.668 SPDK target shutdown done 00:04:51.668 19:05:37 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:04:51.668 INFO: relaunching applications... 00:04:51.668 19:05:37 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:51.668 19:05:37 json_config -- json_config/common.sh@9 -- # local app=target 00:04:51.668 19:05:37 json_config -- json_config/common.sh@10 -- # shift 00:04:51.668 19:05:37 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:51.668 19:05:37 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:51.668 19:05:37 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:51.668 19:05:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.668 19:05:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.668 19:05:37 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1339615 00:04:51.668 19:05:37 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:51.668 Waiting for target to run... 00:04:51.668 19:05:37 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:51.668 19:05:37 json_config -- json_config/common.sh@25 -- # waitforlisten 1339615 /var/tmp/spdk_tgt.sock 00:04:51.668 19:05:37 json_config -- common/autotest_common.sh@831 -- # '[' -z 1339615 ']' 00:04:51.668 19:05:37 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:51.668 19:05:37 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:51.668 19:05:37 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:51.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:51.668 19:05:37 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:51.668 19:05:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.668 [2024-07-24 19:05:37.710562] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:04:51.668 [2024-07-24 19:05:37.710634] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1339615 ] 00:04:51.668 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.927 [2024-07-24 19:05:38.144358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.186 [2024-07-24 19:05:38.232893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.473 [2024-07-24 19:05:41.260660] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:55.473 [2024-07-24 19:05:41.293014] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:55.731 19:05:41 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:55.731 19:05:41 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:55.731 19:05:41 json_config -- json_config/common.sh@26 -- # echo '' 00:04:55.731 00:04:55.731 19:05:41 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:04:55.731 19:05:41 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:55.731 INFO: Checking if target configuration is the same... 00:04:55.731 19:05:41 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:55.731 19:05:41 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:04:55.731 19:05:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:55.731 + '[' 2 -ne 2 ']' 00:04:55.731 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:55.731 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:55.731 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:55.731 +++ basename /dev/fd/62 00:04:55.731 ++ mktemp /tmp/62.XXX 00:04:55.731 + tmp_file_1=/tmp/62.SiH 00:04:55.731 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:55.731 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:55.731 + tmp_file_2=/tmp/spdk_tgt_config.json.ZkX 00:04:55.731 + ret=0 00:04:55.731 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:55.990 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:55.990 + diff -u /tmp/62.SiH /tmp/spdk_tgt_config.json.ZkX 00:04:55.990 + echo 'INFO: JSON config files are the same' 00:04:55.990 INFO: JSON config files are the same 00:04:55.990 + rm /tmp/62.SiH /tmp/spdk_tgt_config.json.ZkX 00:04:55.990 + exit 0 00:04:55.990 19:05:42 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:04:55.990 19:05:42 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:55.990 INFO: changing configuration and checking if this can be detected... 
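The "+"-prefixed trace above is json_diff.sh at work: the live config from save_config goes to one temp file, the on-disk spdk_tgt_config.json to another, both are canonicalized by config_filter.py -method sort, and a plain diff -u decides the verdict. The same idea in a self-contained form, using jq as a stand-in for the harness's sort filter (jq and the two input file names are assumptions of this sketch):

    # Order-insensitive comparison of two JSON configs (sketch).
    cfg_a=$(mktemp) cfg_b=$(mktemp)
    jq -S . running_config.json  > "$cfg_a"   # -S sorts object keys
    jq -S . expected_config.json > "$cfg_b"
    if diff -u "$cfg_a" "$cfg_b"; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi
    rm -f "$cfg_a" "$cfg_b"

Note that jq -S only sorts object keys; unlike the harness's filter it leaves array ordering alone, so it is a looser canonicalization.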
00:04:55.990 19:05:42 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:55.990 19:05:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:56.249 19:05:42 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:56.249 19:05:42 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:04:56.249 19:05:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:56.249 + '[' 2 -ne 2 ']' 00:04:56.249 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:56.249 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:56.249 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:56.249 +++ basename /dev/fd/62 00:04:56.249 ++ mktemp /tmp/62.XXX 00:04:56.249 + tmp_file_1=/tmp/62.lyv 00:04:56.249 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:56.249 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:56.249 + tmp_file_2=/tmp/spdk_tgt_config.json.DKZ 00:04:56.249 + ret=0 00:04:56.249 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:56.508 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:56.508 + diff -u /tmp/62.lyv /tmp/spdk_tgt_config.json.DKZ 00:04:56.508 + ret=1 00:04:56.508 + echo '=== Start of file: /tmp/62.lyv ===' 00:04:56.508 + cat /tmp/62.lyv 00:04:56.508 + echo '=== End of file: /tmp/62.lyv ===' 00:04:56.508 + echo '' 00:04:56.508 + echo '=== Start of file: /tmp/spdk_tgt_config.json.DKZ ===' 00:04:56.508 + cat /tmp/spdk_tgt_config.json.DKZ 00:04:56.508 + echo '=== End of file: /tmp/spdk_tgt_config.json.DKZ ===' 00:04:56.508 + echo '' 00:04:56.508 + rm /tmp/62.lyv /tmp/spdk_tgt_config.json.DKZ 00:04:56.508 + exit 1 00:04:56.508 19:05:42 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:04:56.508 INFO: configuration change detected. 
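The graceful shutdowns in this run (pid 1337909 traced earlier, pid 1341054 further below) follow the json_config/common.sh loop: kill -SIGINT, then a bounded kill -0 poll of up to 30 probes, 0.5 s apart, before declaring the target down. A condensed sketch of that loop (the function name shutdown_app is illustrative):

    # Ask a target to exit, then wait up to ~15 s for it to go away.
    shutdown_app() {
        local pid=$1 i
        kill -SIGINT "$pid"
        for (( i = 0; i < 30; i++ )); do
            kill -0 "$pid" 2>/dev/null || break   # process gone: stop polling
            sleep 0.5
        done
        echo 'SPDK target shutdown done'
    }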
00:04:56.508 19:05:42 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:04:56.508 19:05:42 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:04:56.508 19:05:42 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:56.508 19:05:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.508 19:05:42 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:04:56.508 19:05:42 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:04:56.508 19:05:42 json_config -- json_config/json_config.sh@321 -- # [[ -n 1339615 ]] 00:04:56.508 19:05:42 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:04:56.508 19:05:42 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:04:56.508 19:05:42 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:56.508 19:05:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.767 19:05:42 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:04:56.767 19:05:42 json_config -- json_config/json_config.sh@197 -- # uname -s 00:04:56.767 19:05:42 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:04:56.767 19:05:42 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:04:56.767 19:05:42 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:04:56.767 19:05:42 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:04:56.767 19:05:42 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:56.767 19:05:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.767 19:05:42 json_config -- json_config/json_config.sh@327 -- # killprocess 1339615 00:04:56.767 19:05:42 json_config -- common/autotest_common.sh@950 -- # '[' -z 1339615 ']' 00:04:56.767 19:05:42 json_config -- common/autotest_common.sh@954 -- # kill -0 1339615 00:04:56.767 19:05:42 json_config -- common/autotest_common.sh@955 -- # uname 00:04:56.767 19:05:42 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:56.767 19:05:42 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1339615 00:04:56.767 19:05:42 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:56.767 19:05:42 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:56.767 19:05:42 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1339615' 00:04:56.767 killing process with pid 1339615 00:04:56.767 19:05:42 json_config -- common/autotest_common.sh@969 -- # kill 1339615 00:04:56.767 19:05:42 json_config -- common/autotest_common.sh@974 -- # wait 1339615 00:04:59.312 19:05:44 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:59.312 19:05:44 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:04:59.312 19:05:44 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:59.312 19:05:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.312 19:05:44 json_config -- json_config/json_config.sh@332 -- # return 0 00:04:59.312 19:05:44 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:04:59.312 INFO: Success 00:04:59.312 00:04:59.312 real 0m16.356s 
00:04:59.312 user 0m16.752s 00:04:59.312 sys 0m2.294s 00:04:59.312 19:05:44 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.312 19:05:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.312 ************************************ 00:04:59.312 END TEST json_config 00:04:59.312 ************************************ 00:04:59.312 19:05:45 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:59.312 19:05:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.312 19:05:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.312 19:05:45 -- common/autotest_common.sh@10 -- # set +x 00:04:59.312 ************************************ 00:04:59.312 START TEST json_config_extra_key 00:04:59.312 ************************************ 00:04:59.312 19:05:45 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:59.312 19:05:45 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:59.312 19:05:45 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:59.312 19:05:45 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:59.312 19:05:45 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:59.312 19:05:45 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:59.312 19:05:45 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:59.312 19:05:45 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:59.312 19:05:45 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:59.312 19:05:45 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:59.312 19:05:45 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:59.312 19:05:45 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:59.312 19:05:45 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:59.312 19:05:45 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:04:59.312 19:05:45 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:04:59.312 19:05:45 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:59.312 19:05:45 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:59.312 19:05:45 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:59.313 19:05:45 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:59.313 19:05:45 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:59.313 19:05:45 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:59.313 19:05:45 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:59.313 19:05:45 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:59.313 19:05:45 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.313 19:05:45 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.313 19:05:45 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.313 19:05:45 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:59.313 19:05:45 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.313 19:05:45 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:59.313 19:05:45 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:59.313 19:05:45 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:59.313 19:05:45 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:59.313 19:05:45 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:59.313 19:05:45 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:59.313 19:05:45 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:59.313 19:05:45 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:59.313 19:05:45 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:59.313 19:05:45 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:59.313 19:05:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:59.313 19:05:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:59.313 19:05:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:59.313 19:05:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:59.313 19:05:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:59.313 19:05:45 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:59.313 19:05:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:59.313 19:05:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:59.313 19:05:45 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:59.313 19:05:45 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:59.313 INFO: launching applications... 00:04:59.313 19:05:45 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:59.313 19:05:45 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:59.313 19:05:45 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:59.313 19:05:45 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:59.313 19:05:45 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:59.313 19:05:45 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:59.313 19:05:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:59.313 19:05:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:59.313 19:05:45 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1341054 00:04:59.313 19:05:45 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:59.313 Waiting for target to run... 00:04:59.313 19:05:45 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1341054 /var/tmp/spdk_tgt.sock 00:04:59.313 19:05:45 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 1341054 ']' 00:04:59.313 19:05:45 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:59.313 19:05:45 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:59.313 19:05:45 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:59.313 19:05:45 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:59.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:59.313 19:05:45 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:59.313 19:05:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:59.313 [2024-07-24 19:05:45.239447] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:04:59.313 [2024-07-24 19:05:45.239503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1341054 ] 00:04:59.313 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.572 [2024-07-24 19:05:45.673257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.572 [2024-07-24 19:05:45.761398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.831 19:05:46 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:59.831 19:05:46 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:59.831 19:05:46 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:59.831 00:04:59.831 19:05:46 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:59.831 INFO: shutting down applications... 00:04:59.831 19:05:46 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:59.831 19:05:46 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:59.831 19:05:46 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:59.831 19:05:46 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1341054 ]] 00:04:59.831 19:05:46 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1341054 00:04:59.831 19:05:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:59.831 19:05:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:59.831 19:05:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1341054 00:04:59.831 19:05:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:00.399 19:05:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:00.399 19:05:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:00.399 19:05:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1341054 00:05:00.399 19:05:46 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:00.399 19:05:46 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:00.399 19:05:46 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:00.399 19:05:46 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:00.399 SPDK target shutdown done 00:05:00.399 19:05:46 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:00.399 Success 00:05:00.399 00:05:00.399 real 0m1.458s 00:05:00.399 user 0m1.046s 00:05:00.399 sys 0m0.548s 00:05:00.399 19:05:46 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.399 19:05:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:00.399 ************************************ 00:05:00.399 END TEST json_config_extra_key 00:05:00.399 ************************************ 00:05:00.399 19:05:46 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:00.399 19:05:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.399 19:05:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.399 19:05:46 -- common/autotest_common.sh@10 -- # set +x 00:05:00.399 
************************************ 00:05:00.399 START TEST alias_rpc 00:05:00.399 ************************************ 00:05:00.399 19:05:46 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:00.658 * Looking for test storage... 00:05:00.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:00.658 19:05:46 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:00.658 19:05:46 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1341367 00:05:00.658 19:05:46 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1341367 00:05:00.658 19:05:46 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.658 19:05:46 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 1341367 ']' 00:05:00.658 19:05:46 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.658 19:05:46 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:00.658 19:05:46 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.658 19:05:46 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:00.658 19:05:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.658 [2024-07-24 19:05:46.759272] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:05:00.658 [2024-07-24 19:05:46.759325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1341367 ] 00:05:00.659 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.659 [2024-07-24 19:05:46.828350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.918 [2024-07-24 19:05:46.899388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.485 19:05:47 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:01.485 19:05:47 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:01.485 19:05:47 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:01.745 19:05:47 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1341367 00:05:01.745 19:05:47 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 1341367 ']' 00:05:01.745 19:05:47 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 1341367 00:05:01.745 19:05:47 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:01.745 19:05:47 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:01.745 19:05:47 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1341367 00:05:01.745 19:05:47 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:01.745 19:05:47 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:01.745 19:05:47 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1341367' 00:05:01.745 killing process with pid 1341367 00:05:01.745 19:05:47 alias_rpc -- common/autotest_common.sh@969 -- # kill 1341367 00:05:01.745 19:05:47 
alias_rpc -- common/autotest_common.sh@974 -- # wait 1341367 00:05:02.004 00:05:02.004 real 0m1.496s 00:05:02.004 user 0m1.587s 00:05:02.004 sys 0m0.446s 00:05:02.004 19:05:48 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.004 19:05:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.004 ************************************ 00:05:02.004 END TEST alias_rpc 00:05:02.004 ************************************ 00:05:02.004 19:05:48 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:02.004 19:05:48 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:02.004 19:05:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.004 19:05:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.004 19:05:48 -- common/autotest_common.sh@10 -- # set +x 00:05:02.004 ************************************ 00:05:02.004 START TEST spdkcli_tcp 00:05:02.004 ************************************ 00:05:02.004 19:05:48 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:02.263 * Looking for test storage... 00:05:02.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:02.263 19:05:48 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:02.263 19:05:48 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:02.263 19:05:48 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:02.263 19:05:48 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:02.263 19:05:48 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:02.263 19:05:48 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:02.263 19:05:48 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:02.263 19:05:48 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:02.263 19:05:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:02.263 19:05:48 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1341691 00:05:02.263 19:05:48 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:02.263 19:05:48 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1341691 00:05:02.263 19:05:48 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 1341691 ']' 00:05:02.263 19:05:48 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.263 19:05:48 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:02.263 19:05:48 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.263 19:05:48 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:02.263 19:05:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:02.263 [2024-07-24 19:05:48.347250] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
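The alias_rpc block that just completed drives a bare spdk_tgt through rpc.py's load_config path. A sketch reusing the invocation logged above (the -i flag is copied verbatim from this run; the config file name is a hypothetical stand-in):

  ./build/bin/spdk_tgt &                                   # default socket /var/tmp/spdk.sock
  ./scripts/rpc.py load_config -i < aliased_config.json    # hypothetical input file
  kill -SIGINT $!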
00:05:02.263 [2024-07-24 19:05:48.347300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1341691 ] 00:05:02.263 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.263 [2024-07-24 19:05:48.417573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.263 [2024-07-24 19:05:48.491457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.263 [2024-07-24 19:05:48.491459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.202 19:05:49 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:03.202 19:05:49 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:03.202 19:05:49 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:03.202 19:05:49 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1341766 00:05:03.202 19:05:49 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:03.202 [ 00:05:03.202 "bdev_malloc_delete", 00:05:03.202 "bdev_malloc_create", 00:05:03.202 "bdev_null_resize", 00:05:03.202 "bdev_null_delete", 00:05:03.202 "bdev_null_create", 00:05:03.202 "bdev_nvme_cuse_unregister", 00:05:03.202 "bdev_nvme_cuse_register", 00:05:03.202 "bdev_opal_new_user", 00:05:03.202 "bdev_opal_set_lock_state", 00:05:03.202 "bdev_opal_delete", 00:05:03.202 "bdev_opal_get_info", 00:05:03.202 "bdev_opal_create", 00:05:03.202 "bdev_nvme_opal_revert", 00:05:03.202 "bdev_nvme_opal_init", 00:05:03.202 "bdev_nvme_send_cmd", 00:05:03.202 "bdev_nvme_get_path_iostat", 00:05:03.202 "bdev_nvme_get_mdns_discovery_info", 00:05:03.202 "bdev_nvme_stop_mdns_discovery", 00:05:03.202 "bdev_nvme_start_mdns_discovery", 00:05:03.202 "bdev_nvme_set_multipath_policy", 00:05:03.202 "bdev_nvme_set_preferred_path", 00:05:03.202 "bdev_nvme_get_io_paths", 00:05:03.202 "bdev_nvme_remove_error_injection", 00:05:03.202 "bdev_nvme_add_error_injection", 00:05:03.202 "bdev_nvme_get_discovery_info", 00:05:03.202 "bdev_nvme_stop_discovery", 00:05:03.202 "bdev_nvme_start_discovery", 00:05:03.202 "bdev_nvme_get_controller_health_info", 00:05:03.202 "bdev_nvme_disable_controller", 00:05:03.202 "bdev_nvme_enable_controller", 00:05:03.202 "bdev_nvme_reset_controller", 00:05:03.202 "bdev_nvme_get_transport_statistics", 00:05:03.202 "bdev_nvme_apply_firmware", 00:05:03.202 "bdev_nvme_detach_controller", 00:05:03.202 "bdev_nvme_get_controllers", 00:05:03.202 "bdev_nvme_attach_controller", 00:05:03.202 "bdev_nvme_set_hotplug", 00:05:03.202 "bdev_nvme_set_options", 00:05:03.202 "bdev_passthru_delete", 00:05:03.202 "bdev_passthru_create", 00:05:03.202 "bdev_lvol_set_parent_bdev", 00:05:03.202 "bdev_lvol_set_parent", 00:05:03.202 "bdev_lvol_check_shallow_copy", 00:05:03.202 "bdev_lvol_start_shallow_copy", 00:05:03.202 "bdev_lvol_grow_lvstore", 00:05:03.202 "bdev_lvol_get_lvols", 00:05:03.202 "bdev_lvol_get_lvstores", 00:05:03.202 "bdev_lvol_delete", 00:05:03.202 "bdev_lvol_set_read_only", 00:05:03.202 "bdev_lvol_resize", 00:05:03.202 "bdev_lvol_decouple_parent", 00:05:03.202 "bdev_lvol_inflate", 00:05:03.202 "bdev_lvol_rename", 00:05:03.202 "bdev_lvol_clone_bdev", 00:05:03.202 "bdev_lvol_clone", 00:05:03.202 "bdev_lvol_snapshot", 00:05:03.202 "bdev_lvol_create", 00:05:03.202 "bdev_lvol_delete_lvstore", 00:05:03.202 
"bdev_lvol_rename_lvstore", 00:05:03.202 "bdev_lvol_create_lvstore", 00:05:03.202 "bdev_raid_set_options", 00:05:03.202 "bdev_raid_remove_base_bdev", 00:05:03.202 "bdev_raid_add_base_bdev", 00:05:03.202 "bdev_raid_delete", 00:05:03.202 "bdev_raid_create", 00:05:03.202 "bdev_raid_get_bdevs", 00:05:03.202 "bdev_error_inject_error", 00:05:03.202 "bdev_error_delete", 00:05:03.202 "bdev_error_create", 00:05:03.202 "bdev_split_delete", 00:05:03.202 "bdev_split_create", 00:05:03.202 "bdev_delay_delete", 00:05:03.202 "bdev_delay_create", 00:05:03.202 "bdev_delay_update_latency", 00:05:03.202 "bdev_zone_block_delete", 00:05:03.202 "bdev_zone_block_create", 00:05:03.202 "blobfs_create", 00:05:03.202 "blobfs_detect", 00:05:03.202 "blobfs_set_cache_size", 00:05:03.202 "bdev_aio_delete", 00:05:03.202 "bdev_aio_rescan", 00:05:03.202 "bdev_aio_create", 00:05:03.202 "bdev_ftl_set_property", 00:05:03.202 "bdev_ftl_get_properties", 00:05:03.202 "bdev_ftl_get_stats", 00:05:03.202 "bdev_ftl_unmap", 00:05:03.202 "bdev_ftl_unload", 00:05:03.202 "bdev_ftl_delete", 00:05:03.202 "bdev_ftl_load", 00:05:03.202 "bdev_ftl_create", 00:05:03.202 "bdev_virtio_attach_controller", 00:05:03.202 "bdev_virtio_scsi_get_devices", 00:05:03.202 "bdev_virtio_detach_controller", 00:05:03.202 "bdev_virtio_blk_set_hotplug", 00:05:03.202 "bdev_iscsi_delete", 00:05:03.202 "bdev_iscsi_create", 00:05:03.202 "bdev_iscsi_set_options", 00:05:03.202 "accel_error_inject_error", 00:05:03.202 "ioat_scan_accel_module", 00:05:03.202 "dsa_scan_accel_module", 00:05:03.202 "iaa_scan_accel_module", 00:05:03.202 "vfu_virtio_create_scsi_endpoint", 00:05:03.202 "vfu_virtio_scsi_remove_target", 00:05:03.202 "vfu_virtio_scsi_add_target", 00:05:03.202 "vfu_virtio_create_blk_endpoint", 00:05:03.202 "vfu_virtio_delete_endpoint", 00:05:03.202 "keyring_file_remove_key", 00:05:03.202 "keyring_file_add_key", 00:05:03.202 "keyring_linux_set_options", 00:05:03.202 "iscsi_get_histogram", 00:05:03.202 "iscsi_enable_histogram", 00:05:03.202 "iscsi_set_options", 00:05:03.202 "iscsi_get_auth_groups", 00:05:03.202 "iscsi_auth_group_remove_secret", 00:05:03.202 "iscsi_auth_group_add_secret", 00:05:03.202 "iscsi_delete_auth_group", 00:05:03.202 "iscsi_create_auth_group", 00:05:03.202 "iscsi_set_discovery_auth", 00:05:03.202 "iscsi_get_options", 00:05:03.202 "iscsi_target_node_request_logout", 00:05:03.202 "iscsi_target_node_set_redirect", 00:05:03.202 "iscsi_target_node_set_auth", 00:05:03.202 "iscsi_target_node_add_lun", 00:05:03.202 "iscsi_get_stats", 00:05:03.202 "iscsi_get_connections", 00:05:03.202 "iscsi_portal_group_set_auth", 00:05:03.202 "iscsi_start_portal_group", 00:05:03.202 "iscsi_delete_portal_group", 00:05:03.202 "iscsi_create_portal_group", 00:05:03.202 "iscsi_get_portal_groups", 00:05:03.202 "iscsi_delete_target_node", 00:05:03.202 "iscsi_target_node_remove_pg_ig_maps", 00:05:03.202 "iscsi_target_node_add_pg_ig_maps", 00:05:03.202 "iscsi_create_target_node", 00:05:03.202 "iscsi_get_target_nodes", 00:05:03.202 "iscsi_delete_initiator_group", 00:05:03.202 "iscsi_initiator_group_remove_initiators", 00:05:03.202 "iscsi_initiator_group_add_initiators", 00:05:03.202 "iscsi_create_initiator_group", 00:05:03.202 "iscsi_get_initiator_groups", 00:05:03.202 "nvmf_set_crdt", 00:05:03.202 "nvmf_set_config", 00:05:03.202 "nvmf_set_max_subsystems", 00:05:03.202 "nvmf_stop_mdns_prr", 00:05:03.202 "nvmf_publish_mdns_prr", 00:05:03.202 "nvmf_subsystem_get_listeners", 00:05:03.202 "nvmf_subsystem_get_qpairs", 00:05:03.202 "nvmf_subsystem_get_controllers", 00:05:03.202 
"nvmf_get_stats", 00:05:03.202 "nvmf_get_transports", 00:05:03.202 "nvmf_create_transport", 00:05:03.202 "nvmf_get_targets", 00:05:03.202 "nvmf_delete_target", 00:05:03.202 "nvmf_create_target", 00:05:03.202 "nvmf_subsystem_allow_any_host", 00:05:03.202 "nvmf_subsystem_remove_host", 00:05:03.202 "nvmf_subsystem_add_host", 00:05:03.202 "nvmf_ns_remove_host", 00:05:03.202 "nvmf_ns_add_host", 00:05:03.202 "nvmf_subsystem_remove_ns", 00:05:03.202 "nvmf_subsystem_add_ns", 00:05:03.202 "nvmf_subsystem_listener_set_ana_state", 00:05:03.202 "nvmf_discovery_get_referrals", 00:05:03.202 "nvmf_discovery_remove_referral", 00:05:03.202 "nvmf_discovery_add_referral", 00:05:03.202 "nvmf_subsystem_remove_listener", 00:05:03.202 "nvmf_subsystem_add_listener", 00:05:03.202 "nvmf_delete_subsystem", 00:05:03.202 "nvmf_create_subsystem", 00:05:03.202 "nvmf_get_subsystems", 00:05:03.202 "env_dpdk_get_mem_stats", 00:05:03.202 "nbd_get_disks", 00:05:03.202 "nbd_stop_disk", 00:05:03.202 "nbd_start_disk", 00:05:03.202 "ublk_recover_disk", 00:05:03.202 "ublk_get_disks", 00:05:03.202 "ublk_stop_disk", 00:05:03.202 "ublk_start_disk", 00:05:03.202 "ublk_destroy_target", 00:05:03.202 "ublk_create_target", 00:05:03.202 "virtio_blk_create_transport", 00:05:03.202 "virtio_blk_get_transports", 00:05:03.202 "vhost_controller_set_coalescing", 00:05:03.202 "vhost_get_controllers", 00:05:03.202 "vhost_delete_controller", 00:05:03.202 "vhost_create_blk_controller", 00:05:03.202 "vhost_scsi_controller_remove_target", 00:05:03.202 "vhost_scsi_controller_add_target", 00:05:03.202 "vhost_start_scsi_controller", 00:05:03.202 "vhost_create_scsi_controller", 00:05:03.202 "thread_set_cpumask", 00:05:03.202 "framework_get_governor", 00:05:03.202 "framework_get_scheduler", 00:05:03.202 "framework_set_scheduler", 00:05:03.202 "framework_get_reactors", 00:05:03.202 "thread_get_io_channels", 00:05:03.202 "thread_get_pollers", 00:05:03.202 "thread_get_stats", 00:05:03.202 "framework_monitor_context_switch", 00:05:03.202 "spdk_kill_instance", 00:05:03.202 "log_enable_timestamps", 00:05:03.202 "log_get_flags", 00:05:03.202 "log_clear_flag", 00:05:03.202 "log_set_flag", 00:05:03.202 "log_get_level", 00:05:03.203 "log_set_level", 00:05:03.203 "log_get_print_level", 00:05:03.203 "log_set_print_level", 00:05:03.203 "framework_enable_cpumask_locks", 00:05:03.203 "framework_disable_cpumask_locks", 00:05:03.203 "framework_wait_init", 00:05:03.203 "framework_start_init", 00:05:03.203 "scsi_get_devices", 00:05:03.203 "bdev_get_histogram", 00:05:03.203 "bdev_enable_histogram", 00:05:03.203 "bdev_set_qos_limit", 00:05:03.203 "bdev_set_qd_sampling_period", 00:05:03.203 "bdev_get_bdevs", 00:05:03.203 "bdev_reset_iostat", 00:05:03.203 "bdev_get_iostat", 00:05:03.203 "bdev_examine", 00:05:03.203 "bdev_wait_for_examine", 00:05:03.203 "bdev_set_options", 00:05:03.203 "notify_get_notifications", 00:05:03.203 "notify_get_types", 00:05:03.203 "accel_get_stats", 00:05:03.203 "accel_set_options", 00:05:03.203 "accel_set_driver", 00:05:03.203 "accel_crypto_key_destroy", 00:05:03.203 "accel_crypto_keys_get", 00:05:03.203 "accel_crypto_key_create", 00:05:03.203 "accel_assign_opc", 00:05:03.203 "accel_get_module_info", 00:05:03.203 "accel_get_opc_assignments", 00:05:03.203 "vmd_rescan", 00:05:03.203 "vmd_remove_device", 00:05:03.203 "vmd_enable", 00:05:03.203 "sock_get_default_impl", 00:05:03.203 "sock_set_default_impl", 00:05:03.203 "sock_impl_set_options", 00:05:03.203 "sock_impl_get_options", 00:05:03.203 "iobuf_get_stats", 00:05:03.203 "iobuf_set_options", 
00:05:03.203 "keyring_get_keys", 00:05:03.203 "framework_get_pci_devices", 00:05:03.203 "framework_get_config", 00:05:03.203 "framework_get_subsystems", 00:05:03.203 "vfu_tgt_set_base_path", 00:05:03.203 "trace_get_info", 00:05:03.203 "trace_get_tpoint_group_mask", 00:05:03.203 "trace_disable_tpoint_group", 00:05:03.203 "trace_enable_tpoint_group", 00:05:03.203 "trace_clear_tpoint_mask", 00:05:03.203 "trace_set_tpoint_mask", 00:05:03.203 "spdk_get_version", 00:05:03.203 "rpc_get_methods" 00:05:03.203 ] 00:05:03.203 19:05:49 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:03.203 19:05:49 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:03.203 19:05:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:03.203 19:05:49 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:03.203 19:05:49 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1341691 00:05:03.203 19:05:49 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 1341691 ']' 00:05:03.203 19:05:49 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 1341691 00:05:03.203 19:05:49 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:03.203 19:05:49 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:03.203 19:05:49 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1341691 00:05:03.203 19:05:49 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:03.203 19:05:49 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:03.203 19:05:49 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1341691' 00:05:03.203 killing process with pid 1341691 00:05:03.203 19:05:49 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 1341691 00:05:03.203 19:05:49 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 1341691 00:05:03.771 00:05:03.771 real 0m1.530s 00:05:03.771 user 0m2.727s 00:05:03.771 sys 0m0.533s 00:05:03.771 19:05:49 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.771 19:05:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:03.771 ************************************ 00:05:03.771 END TEST spdkcli_tcp 00:05:03.771 ************************************ 00:05:03.771 19:05:49 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:03.771 19:05:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:03.771 19:05:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.771 19:05:49 -- common/autotest_common.sh@10 -- # set +x 00:05:03.771 ************************************ 00:05:03.771 START TEST dpdk_mem_utility 00:05:03.771 ************************************ 00:05:03.771 19:05:49 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:03.771 * Looking for test storage... 
00:05:03.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:03.771 19:05:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:03.771 19:05:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1342019 00:05:03.771 19:05:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:03.771 19:05:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1342019 00:05:03.771 19:05:49 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 1342019 ']' 00:05:03.771 19:05:49 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.771 19:05:49 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:03.771 19:05:49 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.771 19:05:49 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:03.771 19:05:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:03.771 [2024-07-24 19:05:49.945046] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:05:03.771 [2024-07-24 19:05:49.945096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1342019 ] 00:05:03.771 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.030 [2024-07-24 19:05:50.015497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.030 [2024-07-24 19:05:50.096293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.598 19:05:50 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:04.598 19:05:50 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:04.598 19:05:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:04.598 19:05:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:04.598 19:05:50 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.598 19:05:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:04.598 { 00:05:04.598 "filename": "/tmp/spdk_mem_dump.txt" 00:05:04.598 } 00:05:04.598 19:05:50 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.598 19:05:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:04.598 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:04.598 1 heaps totaling size 814.000000 MiB 00:05:04.598 size: 814.000000 MiB heap id: 0 00:05:04.598 end heaps---------- 00:05:04.598 8 mempools totaling size 598.116089 MiB 00:05:04.598 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:04.598 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:04.598 size: 84.521057 MiB name: bdev_io_1342019 00:05:04.598 size: 51.011292 MiB name: evtpool_1342019 00:05:04.598 
size: 50.003479 MiB name: msgpool_1342019 00:05:04.598 size: 21.763794 MiB name: PDU_Pool 00:05:04.598 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:04.598 size: 0.026123 MiB name: Session_Pool 00:05:04.598 end mempools------- 00:05:04.598 6 memzones totaling size 4.142822 MiB 00:05:04.598 size: 1.000366 MiB name: RG_ring_0_1342019 00:05:04.598 size: 1.000366 MiB name: RG_ring_1_1342019 00:05:04.598 size: 1.000366 MiB name: RG_ring_4_1342019 00:05:04.598 size: 1.000366 MiB name: RG_ring_5_1342019 00:05:04.598 size: 0.125366 MiB name: RG_ring_2_1342019 00:05:04.598 size: 0.015991 MiB name: RG_ring_3_1342019 00:05:04.598 end memzones------- 00:05:04.598 19:05:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:04.598 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:04.598 list of free elements. size: 12.519348 MiB 00:05:04.598 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:04.598 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:04.598 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:04.598 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:04.598 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:04.598 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:04.598 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:04.598 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:04.598 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:04.598 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:04.598 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:04.598 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:04.598 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:04.598 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:04.598 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:04.598 list of standard malloc elements. 
size: 199.218079 MiB 00:05:04.598 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:04.599 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:04.599 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:04.599 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:04.599 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:04.599 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:04.599 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:04.599 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:04.599 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:04.599 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:04.599 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:04.599 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:04.599 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:04.599 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:04.599 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:04.599 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:04.599 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:04.599 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:04.599 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:04.599 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:04.599 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:04.599 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:04.599 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:04.599 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:04.599 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:04.599 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:04.599 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:04.599 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:04.599 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:04.599 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:04.599 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:04.599 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:04.599 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:04.599 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:04.599 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:04.599 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:04.599 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:04.599 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:04.599 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:04.599 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:04.599 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:04.599 list of memzone associated elements. 
size: 602.262573 MiB 00:05:04.599 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:04.599 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:04.599 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:04.599 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:04.599 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:04.599 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1342019_0 00:05:04.599 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:04.599 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1342019_0 00:05:04.599 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:04.599 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1342019_0 00:05:04.599 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:04.599 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:04.599 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:04.599 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:04.599 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:04.599 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1342019 00:05:04.599 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:04.599 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1342019 00:05:04.599 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:04.599 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1342019 00:05:04.599 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:04.599 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:04.599 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:04.599 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:04.599 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:04.599 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:04.599 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:04.599 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:04.599 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:04.599 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1342019 00:05:04.599 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:04.599 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1342019 00:05:04.599 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:04.599 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1342019 00:05:04.599 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:04.599 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1342019 00:05:04.599 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:04.599 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1342019 00:05:04.599 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:04.599 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:04.599 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:04.599 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:04.599 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:04.599 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:04.599 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:04.599 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1342019 00:05:04.599 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:04.599 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:04.599 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:04.599 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:04.599 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:04.599 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1342019 00:05:04.599 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:04.599 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:04.599 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:04.599 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1342019 00:05:04.599 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:04.599 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1342019 00:05:04.599 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:04.599 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:04.859 19:05:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:04.859 19:05:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1342019 00:05:04.859 19:05:50 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 1342019 ']' 00:05:04.859 19:05:50 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 1342019 00:05:04.859 19:05:50 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:04.859 19:05:50 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:04.859 19:05:50 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1342019 00:05:04.859 19:05:50 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:04.859 19:05:50 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:04.859 19:05:50 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1342019' 00:05:04.859 killing process with pid 1342019 00:05:04.859 19:05:50 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 1342019 00:05:04.859 19:05:50 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 1342019 00:05:05.119 00:05:05.119 real 0m1.416s 00:05:05.119 user 0m1.448s 00:05:05.119 sys 0m0.442s 00:05:05.119 19:05:51 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:05.119 19:05:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:05.119 ************************************ 00:05:05.119 END TEST dpdk_mem_utility 00:05:05.119 ************************************ 00:05:05.119 19:05:51 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:05.119 19:05:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.119 19:05:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.119 19:05:51 -- common/autotest_common.sh@10 -- # set +x 00:05:05.119 ************************************ 00:05:05.119 START TEST event 00:05:05.119 ************************************ 00:05:05.119 19:05:51 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:05.119 * Looking for test storage... 
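The heap/mempool/memzone dump above comes from scripts/dpdk_mem_info.py, which parses the memory snapshot the target writes out when asked via RPC. The two-step flow, as logged:

  # ask the running target to dump DPDK memory stats
  ./scripts/rpc.py env_dpdk_get_mem_stats     # reports /tmp/spdk_mem_dump.txt

  # summarize all heaps/mempools/memzones, then drill into one heap
  ./scripts/dpdk_mem_info.py
  ./scripts/dpdk_mem_info.py -m 0             # -m 0 produced the per-element heap 0 listing above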
00:05:05.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:05.378 19:05:51 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:05.378 19:05:51 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:05.378 19:05:51 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:05.378 19:05:51 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:05.378 19:05:51 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.378 19:05:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.378 ************************************ 00:05:05.378 START TEST event_perf 00:05:05.378 ************************************ 00:05:05.378 19:05:51 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:05.378 Running I/O for 1 seconds...[2024-07-24 19:05:51.423857] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:05:05.378 [2024-07-24 19:05:51.423930] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1342347 ] 00:05:05.378 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.378 [2024-07-24 19:05:51.496018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:05.378 [2024-07-24 19:05:51.567580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.378 [2024-07-24 19:05:51.567677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:05.378 [2024-07-24 19:05:51.567744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:05.378 [2024-07-24 19:05:51.567746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.756 Running I/O for 1 seconds... 00:05:06.756 lcore 0: 215643 00:05:06.756 lcore 1: 215643 00:05:06.756 lcore 2: 215644 00:05:06.756 lcore 3: 215643 00:05:06.756 done. 00:05:06.756 00:05:06.756 real 0m1.232s 00:05:06.756 user 0m4.139s 00:05:06.756 sys 0m0.090s 00:05:06.756 19:05:52 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:06.756 19:05:52 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:06.756 ************************************ 00:05:06.756 END TEST event_perf 00:05:06.756 ************************************ 00:05:06.756 19:05:52 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:06.756 19:05:52 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:06.756 19:05:52 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.756 19:05:52 event -- common/autotest_common.sh@10 -- # set +x 00:05:06.756 ************************************ 00:05:06.756 START TEST event_reactor 00:05:06.756 ************************************ 00:05:06.756 19:05:52 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:06.756 [2024-07-24 19:05:52.720128] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
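event_perf above ran one reactor per core in the 0xF mask for one second and counted events handled per lcore; the four logged counts sum to 862,573 events. Reproducing the run, as logged:

  ./test/event/event_perf/event_perf -m 0xF -t 1   # -m cpumask, -t seconds
  # expect one 'lcore N: <count>' line per reactor, then 'done.'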
00:05:06.756 [2024-07-24 19:05:52.720178] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1342629 ] 00:05:06.756 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.756 [2024-07-24 19:05:52.787082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.756 [2024-07-24 19:05:52.853548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.703 test_start 00:05:07.703 oneshot 00:05:07.703 tick 100 00:05:07.703 tick 100 00:05:07.703 tick 250 00:05:07.703 tick 100 00:05:07.703 tick 100 00:05:07.703 tick 250 00:05:07.703 tick 100 00:05:07.703 tick 500 00:05:07.703 tick 100 00:05:07.703 tick 100 00:05:07.703 tick 250 00:05:07.703 tick 100 00:05:07.703 tick 100 00:05:07.703 test_end 00:05:07.703 00:05:07.703 real 0m1.206s 00:05:07.703 user 0m1.126s 00:05:07.703 sys 0m0.076s 00:05:07.703 19:05:53 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:07.703 19:05:53 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:07.703 ************************************ 00:05:07.703 END TEST event_reactor 00:05:07.703 ************************************ 00:05:07.962 19:05:53 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:07.962 19:05:53 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:07.962 19:05:53 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.962 19:05:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:07.962 ************************************ 00:05:07.962 START TEST event_reactor_perf 00:05:07.962 ************************************ 00:05:07.962 19:05:53 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:07.962 [2024-07-24 19:05:54.017820] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
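The 'EAL: No free 2048 kB hugepages reported on node 1' notice that recurs throughout this run is informational here: DPDK simply found no free 2 MiB hugepages reserved on NUMA node 1. Per-node availability can be inspected through standard sysfs counters:

  for n in /sys/devices/system/node/node*/hugepages/hugepages-2048kB; do
      echo "$n: total=$(cat $n/nr_hugepages) free=$(cat $n/free_hugepages)"
  done
  grep -i huge /proc/meminfo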
00:05:07.962 [2024-07-24 19:05:54.017902] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1342836 ] 00:05:07.962 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.962 [2024-07-24 19:05:54.089968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.962 [2024-07-24 19:05:54.158696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.341 test_start 00:05:09.341 test_end 00:05:09.341 Performance: 538965 events per second 00:05:09.341 00:05:09.341 real 0m1.226s 00:05:09.341 user 0m1.138s 00:05:09.341 sys 0m0.084s 00:05:09.341 19:05:55 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.341 19:05:55 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:09.341 ************************************ 00:05:09.341 END TEST event_reactor_perf 00:05:09.341 ************************************ 00:05:09.341 19:05:55 event -- event/event.sh@49 -- # uname -s 00:05:09.341 19:05:55 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:09.341 19:05:55 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:09.341 19:05:55 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.341 19:05:55 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.341 19:05:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:09.341 ************************************ 00:05:09.341 START TEST event_scheduler 00:05:09.341 ************************************ 00:05:09.341 19:05:55 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:09.341 * Looking for test storage... 00:05:09.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:09.341 19:05:55 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:09.341 19:05:55 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1343091 00:05:09.341 19:05:55 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.341 19:05:55 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:09.341 19:05:55 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1343091 00:05:09.341 19:05:55 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 1343091 ']' 00:05:09.341 19:05:55 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.341 19:05:55 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:09.341 19:05:55 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
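Each test binary is gated on the same handshake: waitforlisten blocks until the new process answers on its UNIX-domain RPC socket before any RPCs are issued. A simplified sketch of that polling loop (the real helper lives in autotest_common.sh; rpc_get_methods is just a cheap probe):

  sock=/var/tmp/spdk.sock
  for i in $(seq 1 100); do
      ./scripts/rpc.py -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done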
00:05:09.341 19:05:55 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:09.341 19:05:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.341 [2024-07-24 19:05:55.445529] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:05:09.341 [2024-07-24 19:05:55.445582] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1343091 ] 00:05:09.341 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.341 [2024-07-24 19:05:55.515618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:09.601 [2024-07-24 19:05:55.593294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.601 [2024-07-24 19:05:55.593376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.601 [2024-07-24 19:05:55.593441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:09.601 [2024-07-24 19:05:55.593443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:10.171 19:05:56 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:10.171 19:05:56 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:10.171 19:05:56 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:10.171 19:05:56 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.171 19:05:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:10.171 [2024-07-24 19:05:56.255759] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:10.171 [2024-07-24 19:05:56.255779] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:10.171 [2024-07-24 19:05:56.255790] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:10.171 [2024-07-24 19:05:56.255797] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:10.171 [2024-07-24 19:05:56.255805] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:10.171 19:05:56 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.171 19:05:56 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:10.171 19:05:56 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.171 19:05:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:10.171 [2024-07-24 19:05:56.327799] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
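The notices above show framework_set_scheduler switching the app to the dynamic scheduler: the DPDK governor cannot initialize because the core mask contains some but not all SMT siblings, so only the scheduler itself comes up, with the logged thresholds of load limit 20, core limit 80 and core busy 95. The same switch can be made against any running SPDK app; both methods appear in the rpc_get_methods dump earlier in this log:

  ./scripts/rpc.py framework_set_scheduler dynamic
  ./scripts/rpc.py framework_get_scheduler      # confirm the active scheduler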
00:05:10.171 19:05:56 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.171 19:05:56 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:10.171 19:05:56 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.171 19:05:56 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.171 19:05:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:10.171 ************************************ 00:05:10.171 START TEST scheduler_create_thread 00:05:10.171 ************************************ 00:05:10.171 19:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:10.171 19:05:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:10.171 19:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.171 19:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.171 2 00:05:10.171 19:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.171 19:05:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:10.171 19:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.171 19:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.171 3 00:05:10.171 19:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.171 19:05:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:10.171 19:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.171 19:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.171 4 00:05:10.171 19:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.171 19:05:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:10.171 19:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.171 19:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.171 5 00:05:10.171 19:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.493 19:05:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:10.493 19:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.493 19:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.493 6 00:05:10.493 19:05:56 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.493 19:05:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:10.493 19:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.493 19:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.493 7 00:05:10.493 19:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.493 19:05:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:10.493 19:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.493 19:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.493 8 00:05:10.493 19:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.493 19:05:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:10.493 19:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.493 19:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.494 9 00:05:10.494 19:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.494 19:05:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:10.494 19:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.494 19:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.494 10 00:05:10.494 19:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.494 19:05:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:10.494 19:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.494 19:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.494 19:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.494 19:05:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:10.494 19:05:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:10.494 19:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.494 19:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.753 19:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.753 19:05:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:10.753 19:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.753 19:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.657 19:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.657 19:05:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:12.657 19:05:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:12.657 19:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.657 19:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.593 19:05:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.593 00:05:13.593 real 0m3.100s 00:05:13.593 user 0m0.025s 00:05:13.593 sys 0m0.006s 00:05:13.593 19:05:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.593 19:05:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.593 ************************************ 00:05:13.593 END TEST scheduler_create_thread 00:05:13.593 ************************************ 00:05:13.593 19:05:59 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:13.593 19:05:59 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1343091 00:05:13.593 19:05:59 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 1343091 ']' 00:05:13.593 19:05:59 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 1343091 00:05:13.593 19:05:59 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:13.593 19:05:59 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:13.593 19:05:59 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1343091 00:05:13.593 19:05:59 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:13.593 19:05:59 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:13.593 19:05:59 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1343091' 00:05:13.593 killing process with pid 1343091 00:05:13.593 19:05:59 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 1343091 00:05:13.593 19:05:59 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 1343091 00:05:13.852 [2024-07-24 19:05:59.846982] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
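scheduler_create_thread above drives the whole thread lifecycle through an rpc.py plugin: pinned active and idle threads on each core, an active percentage per thread, a live set_active adjustment, and a delete. The invocations below are copied from the run; thread ids 11 and 12 are values returned during this particular run, and the scheduler_plugin module must be importable (e.g. via PYTHONPATH, as the test arranges) for --plugin to resolve:

  rpc='./scripts/rpc.py --plugin scheduler_plugin'
  $rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # -m cpumask, -a active %
  $rpc scheduler_thread_create -n half_active -a 0
  $rpc scheduler_thread_set_active 11 50                        # thread 11 -> 50% active
  $rpc scheduler_thread_delete 12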
00:05:13.852 00:05:13.852 real 0m4.760s 00:05:13.852 user 0m9.172s 00:05:13.852 sys 0m0.436s 00:05:13.852 19:06:00 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.852 19:06:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:13.852 ************************************ 00:05:13.852 END TEST event_scheduler 00:05:13.852 ************************************ 00:05:14.111 19:06:00 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:14.111 19:06:00 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:14.111 19:06:00 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.111 19:06:00 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.111 19:06:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:14.111 ************************************ 00:05:14.111 START TEST app_repeat 00:05:14.111 ************************************ 00:05:14.111 19:06:00 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:14.111 19:06:00 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.111 19:06:00 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.111 19:06:00 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:14.111 19:06:00 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.111 19:06:00 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:14.111 19:06:00 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:14.111 19:06:00 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:14.111 19:06:00 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1344014 00:05:14.111 19:06:00 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:14.111 19:06:00 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1344014' 00:05:14.111 Process app_repeat pid: 1344014 00:05:14.111 19:06:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:14.111 19:06:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:14.111 spdk_app_start Round 0 00:05:14.111 19:06:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1344014 /var/tmp/spdk-nbd.sock 00:05:14.111 19:06:00 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1344014 ']' 00:05:14.111 19:06:00 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:14.111 19:06:00 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:14.111 19:06:00 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:14.111 19:06:00 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:14.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:14.111 19:06:00 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:14.111 19:06:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:14.111 [2024-07-24 19:06:00.183241] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:05:14.111 [2024-07-24 19:06:00.183305] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1344014 ] 00:05:14.111 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.111 [2024-07-24 19:06:00.255392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:14.111 [2024-07-24 19:06:00.336072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.111 [2024-07-24 19:06:00.336076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.049 19:06:00 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:15.049 19:06:00 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:15.049 19:06:00 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.049 Malloc0 00:05:15.049 19:06:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.308 Malloc1 00:05:15.308 19:06:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.308 19:06:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.308 19:06:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.308 19:06:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:15.308 19:06:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.308 19:06:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:15.308 19:06:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.308 19:06:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.308 19:06:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.308 19:06:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:15.308 19:06:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.308 19:06:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:15.308 19:06:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:15.308 19:06:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:15.308 19:06:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.308 19:06:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:15.308 /dev/nbd0 00:05:15.568 19:06:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:15.568 19:06:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:15.568 19:06:01 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:15.568 19:06:01 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:15.568 19:06:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:15.568 19:06:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:15.568 19:06:01 event.app_repeat 
-- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:15.568 19:06:01 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:15.568 19:06:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:15.568 19:06:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:15.568 19:06:01 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.568 1+0 records in 00:05:15.568 1+0 records out 00:05:15.568 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021254 s, 19.3 MB/s 00:05:15.568 19:06:01 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:15.568 19:06:01 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:15.568 19:06:01 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:15.568 19:06:01 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:15.568 19:06:01 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:15.568 19:06:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.568 19:06:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.568 19:06:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:15.568 /dev/nbd1 00:05:15.568 19:06:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:15.568 19:06:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:15.568 19:06:01 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:15.568 19:06:01 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:15.568 19:06:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:15.568 19:06:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:15.568 19:06:01 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:15.568 19:06:01 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:15.568 19:06:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:15.568 19:06:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:15.568 19:06:01 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.568 1+0 records in 00:05:15.568 1+0 records out 00:05:15.568 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231823 s, 17.7 MB/s 00:05:15.569 19:06:01 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:15.569 19:06:01 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:15.569 19:06:01 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:15.569 19:06:01 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:15.569 19:06:01 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:15.569 19:06:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.569 19:06:01 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.569 19:06:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.569 19:06:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.569 19:06:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.828 19:06:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:15.828 { 00:05:15.828 "nbd_device": "/dev/nbd0", 00:05:15.828 "bdev_name": "Malloc0" 00:05:15.828 }, 00:05:15.828 { 00:05:15.828 "nbd_device": "/dev/nbd1", 00:05:15.828 "bdev_name": "Malloc1" 00:05:15.828 } 00:05:15.828 ]' 00:05:15.828 19:06:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:15.828 { 00:05:15.828 "nbd_device": "/dev/nbd0", 00:05:15.828 "bdev_name": "Malloc0" 00:05:15.828 }, 00:05:15.828 { 00:05:15.828 "nbd_device": "/dev/nbd1", 00:05:15.828 "bdev_name": "Malloc1" 00:05:15.828 } 00:05:15.828 ]' 00:05:15.828 19:06:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:15.828 19:06:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:15.828 /dev/nbd1' 00:05:15.828 19:06:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:15.828 /dev/nbd1' 00:05:15.828 19:06:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.828 19:06:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:15.828 19:06:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:15.828 19:06:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:15.828 19:06:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:15.828 19:06:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:15.828 19:06:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.828 19:06:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.828 19:06:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:15.828 19:06:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:15.828 19:06:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:15.828 19:06:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:15.828 256+0 records in 00:05:15.828 256+0 records out 00:05:15.828 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106592 s, 98.4 MB/s 00:05:15.828 19:06:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.828 19:06:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:15.828 256+0 records in 00:05:15.828 256+0 records out 00:05:15.828 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0195784 s, 53.6 MB/s 00:05:15.828 19:06:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.828 19:06:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:16.087 256+0 records in 00:05:16.087 256+0 records out 00:05:16.087 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0210736 s, 49.8 MB/s 00:05:16.087 19:06:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:16.087 19:06:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.087 19:06:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:16.087 19:06:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:16.087 19:06:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:16.087 19:06:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:16.087 19:06:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:16.087 19:06:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:16.087 19:06:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:16.087 19:06:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:16.087 19:06:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:16.087 19:06:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:16.087 19:06:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:16.087 19:06:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.087 19:06:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.087 19:06:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:16.087 19:06:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:16.087 19:06:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:16.087 19:06:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:16.087 19:06:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:16.087 19:06:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:16.087 19:06:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:16.087 19:06:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.087 19:06:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.087 19:06:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:16.087 19:06:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:16.087 19:06:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.087 19:06:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:16.087 19:06:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:16.346 19:06:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:16.346 19:06:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:16.346 19:06:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:16.346 19:06:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.346 19:06:02 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.346 19:06:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:16.346 19:06:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:16.346 19:06:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.346 19:06:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:16.346 19:06:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.346 19:06:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:16.605 19:06:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:16.605 19:06:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:16.605 19:06:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:16.605 19:06:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:16.605 19:06:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:16.605 19:06:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:16.605 19:06:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:16.605 19:06:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:16.605 19:06:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:16.605 19:06:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:16.605 19:06:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:16.605 19:06:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:16.605 19:06:02 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:16.864 19:06:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:16.864 [2024-07-24 19:06:03.096713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:17.124 [2024-07-24 19:06:03.165020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.124 [2024-07-24 19:06:03.165024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.124 [2024-07-24 19:06:03.205781] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:17.124 [2024-07-24 19:06:03.205824] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:20.412 19:06:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:20.412 19:06:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:20.412 spdk_app_start Round 1 00:05:20.412 19:06:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1344014 /var/tmp/spdk-nbd.sock 00:05:20.412 19:06:05 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1344014 ']' 00:05:20.412 19:06:05 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:20.412 19:06:05 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:20.412 19:06:05 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:20.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
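Round 0 above, like each round that follows, boils down to the same nbd round trip. A minimal sketch of the setup half, using only the RPCs and sizes that appear in the log; the socket path matches the test, /tmp paths are placeholders, and the probe loop condenses what waitfornbd retries up to 20 times:

    rpc="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    # Two 64 MB malloc bdevs with a 4 KiB block size; each call prints the bdev name.
    $rpc bdev_malloc_create 64 4096   # -> Malloc0
    $rpc bdev_malloc_create 64 4096   # -> Malloc1
    # Export them as kernel nbd devices.
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1
    # waitfornbd: poll /proc/partitions, then prove the device answers one direct read.
    for nbd in nbd0 nbd1; do
        until grep -q -w "$nbd" /proc/partitions; do sleep 0.1; done
        dd if=/dev/$nbd of=/tmp/nbdtest bs=4096 count=1 iflag=direct && rm -f /tmp/nbdtest
    done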
00:05:20.412 19:06:05 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:20.412 19:06:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:20.412 19:06:06 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:20.412 19:06:06 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:20.412 19:06:06 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:20.412 Malloc0 00:05:20.412 19:06:06 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:20.412 Malloc1 00:05:20.413 19:06:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:20.413 19:06:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.413 19:06:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:20.413 19:06:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:20.413 19:06:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.413 19:06:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:20.413 19:06:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:20.413 19:06:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.413 19:06:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:20.413 19:06:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:20.413 19:06:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.413 19:06:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:20.413 19:06:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:20.413 19:06:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:20.413 19:06:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.413 19:06:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:20.413 /dev/nbd0 00:05:20.413 19:06:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:20.413 19:06:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:20.413 19:06:06 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:20.413 19:06:06 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:20.413 19:06:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:20.413 19:06:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:20.413 19:06:06 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:20.672 19:06:06 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:20.672 19:06:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:20.672 19:06:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:20.672 19:06:06 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:20.672 1+0 records in 00:05:20.672 1+0 records out 00:05:20.672 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253865 s, 16.1 MB/s 00:05:20.672 19:06:06 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:20.672 19:06:06 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:20.672 19:06:06 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:20.672 19:06:06 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:20.673 19:06:06 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:20.673 19:06:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:20.673 19:06:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.673 19:06:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:20.673 /dev/nbd1 00:05:20.673 19:06:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:20.673 19:06:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:20.673 19:06:06 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:20.673 19:06:06 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:20.673 19:06:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:20.673 19:06:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:20.673 19:06:06 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:20.673 19:06:06 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:20.673 19:06:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:20.673 19:06:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:20.673 19:06:06 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:20.673 1+0 records in 00:05:20.673 1+0 records out 00:05:20.673 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184788 s, 22.2 MB/s 00:05:20.673 19:06:06 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:20.673 19:06:06 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:20.673 19:06:06 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:20.673 19:06:06 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:20.673 19:06:06 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:20.673 19:06:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:20.673 19:06:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.673 19:06:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:20.673 19:06:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.673 19:06:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:20.931 19:06:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:20.931 { 00:05:20.931 "nbd_device": "/dev/nbd0", 00:05:20.931 "bdev_name": "Malloc0" 00:05:20.931 }, 00:05:20.931 { 00:05:20.931 "nbd_device": "/dev/nbd1", 00:05:20.931 "bdev_name": "Malloc1" 00:05:20.931 } 00:05:20.931 ]' 00:05:20.931 19:06:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:20.931 { 00:05:20.931 "nbd_device": "/dev/nbd0", 00:05:20.931 "bdev_name": "Malloc0" 00:05:20.931 }, 00:05:20.931 { 00:05:20.931 "nbd_device": "/dev/nbd1", 00:05:20.931 "bdev_name": "Malloc1" 00:05:20.931 } 00:05:20.931 ]' 00:05:20.931 19:06:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:20.931 19:06:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:20.931 /dev/nbd1' 00:05:20.932 19:06:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:20.932 /dev/nbd1' 00:05:20.932 19:06:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:20.932 19:06:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:20.932 19:06:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:20.932 19:06:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:20.932 19:06:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:20.932 19:06:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:20.932 19:06:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.932 19:06:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:20.932 19:06:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:20.932 19:06:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:20.932 19:06:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:20.932 19:06:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:20.932 256+0 records in 00:05:20.932 256+0 records out 00:05:20.932 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011372 s, 92.2 MB/s 00:05:20.932 19:06:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:20.932 19:06:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:20.932 256+0 records in 00:05:20.932 256+0 records out 00:05:20.932 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197409 s, 53.1 MB/s 00:05:20.932 19:06:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:20.932 19:06:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:20.932 256+0 records in 00:05:20.932 256+0 records out 00:05:20.932 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146543 s, 71.6 MB/s 00:05:20.932 19:06:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:20.932 19:06:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.932 19:06:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:20.932 19:06:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:20.932 19:06:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:20.932 19:06:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:20.932 19:06:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:20.932 19:06:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:20.932 19:06:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:20.932 19:06:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:20.932 19:06:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:20.932 19:06:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:21.191 19:06:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:21.191 19:06:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.191 19:06:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.191 19:06:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:21.191 19:06:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:21.191 19:06:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:21.191 19:06:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:21.191 19:06:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:21.191 19:06:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:21.191 19:06:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:21.191 19:06:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:21.191 19:06:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:21.191 19:06:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:21.191 19:06:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:21.191 19:06:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:21.191 19:06:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:21.191 19:06:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:21.450 19:06:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:21.450 19:06:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:21.450 19:06:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:21.450 19:06:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:21.450 19:06:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:21.450 19:06:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:21.450 19:06:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:21.450 19:06:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:21.450 19:06:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:21.450 19:06:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.450 19:06:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:21.709 19:06:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:21.709 19:06:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:21.709 19:06:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:21.709 19:06:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:21.709 19:06:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:21.709 19:06:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:21.709 19:06:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:21.709 19:06:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:21.709 19:06:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:21.709 19:06:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:21.709 19:06:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:21.709 19:06:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:21.709 19:06:07 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:21.968 19:06:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:21.968 [2024-07-24 19:06:08.181593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:22.227 [2024-07-24 19:06:08.244222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.227 [2024-07-24 19:06:08.244226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.227 [2024-07-24 19:06:08.285687] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:22.227 [2024-07-24 19:06:08.285732] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:25.513 19:06:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:25.513 19:06:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:25.513 spdk_app_start Round 2 00:05:25.513 19:06:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1344014 /var/tmp/spdk-nbd.sock 00:05:25.513 19:06:11 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1344014 ']' 00:05:25.513 19:06:11 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:25.513 19:06:11 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:25.513 19:06:11 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:25.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
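The 256-block dd and cmp pairs in each round are the actual data-integrity check. Reduced to its essentials (the temporary file path is a placeholder for the test's nbdrandtest file), the pattern is:

    # Stage 1 MiB of random data and push it through each nbd device with direct I/O...
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=/tmp/nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
    done
    # ...then read each device back and compare the first 1 MiB byte-for-byte.
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M /tmp/nbdrandtest "$nbd"
    done
    rm /tmp/nbdrandtest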
00:05:25.513 19:06:11 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:25.513 19:06:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:25.513 19:06:11 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:25.513 19:06:11 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:25.513 19:06:11 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:25.513 Malloc0 00:05:25.513 19:06:11 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:25.513 Malloc1 00:05:25.513 19:06:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:25.513 19:06:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.513 19:06:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.513 19:06:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:25.513 19:06:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.513 19:06:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:25.513 19:06:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:25.513 19:06:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.513 19:06:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.513 19:06:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:25.513 19:06:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.513 19:06:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:25.513 19:06:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:25.513 19:06:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:25.513 19:06:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.513 19:06:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:25.513 /dev/nbd0 00:05:25.513 19:06:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:25.513 19:06:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:25.513 19:06:11 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:25.513 19:06:11 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:25.513 19:06:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:25.513 19:06:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:25.513 19:06:11 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:25.513 19:06:11 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:25.513 19:06:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:25.513 19:06:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:25.513 19:06:11 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:25.513 1+0 records in 00:05:25.513 1+0 records out 00:05:25.513 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022669 s, 18.1 MB/s 00:05:25.513 19:06:11 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:25.513 19:06:11 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:25.513 19:06:11 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:25.513 19:06:11 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:25.513 19:06:11 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:25.513 19:06:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.513 19:06:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.513 19:06:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:25.774 /dev/nbd1 00:05:25.774 19:06:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:25.774 19:06:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:25.774 19:06:11 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:25.774 19:06:11 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:25.774 19:06:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:25.774 19:06:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:25.774 19:06:11 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:25.774 19:06:11 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:25.774 19:06:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:25.774 19:06:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:25.774 19:06:11 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:25.774 1+0 records in 00:05:25.774 1+0 records out 00:05:25.774 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234581 s, 17.5 MB/s 00:05:25.774 19:06:11 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:25.774 19:06:11 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:25.774 19:06:11 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:25.774 19:06:11 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:25.774 19:06:11 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:25.774 19:06:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.774 19:06:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.774 19:06:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.774 19:06:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.774 19:06:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:26.033 19:06:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:26.033 { 00:05:26.033 "nbd_device": "/dev/nbd0", 00:05:26.033 "bdev_name": "Malloc0" 00:05:26.033 }, 00:05:26.033 { 00:05:26.033 "nbd_device": "/dev/nbd1", 00:05:26.033 "bdev_name": "Malloc1" 00:05:26.033 } 00:05:26.033 ]' 00:05:26.033 19:06:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:26.033 { 00:05:26.033 "nbd_device": "/dev/nbd0", 00:05:26.033 "bdev_name": "Malloc0" 00:05:26.033 }, 00:05:26.033 { 00:05:26.033 "nbd_device": "/dev/nbd1", 00:05:26.033 "bdev_name": "Malloc1" 00:05:26.033 } 00:05:26.033 ]' 00:05:26.033 19:06:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:26.033 19:06:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:26.033 /dev/nbd1' 00:05:26.033 19:06:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:26.033 /dev/nbd1' 00:05:26.033 19:06:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:26.033 19:06:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:26.033 19:06:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:26.033 19:06:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:26.033 19:06:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:26.034 19:06:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:26.034 19:06:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.034 19:06:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:26.034 19:06:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:26.034 19:06:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:26.034 19:06:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:26.034 19:06:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:26.034 256+0 records in 00:05:26.034 256+0 records out 00:05:26.034 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0042365 s, 248 MB/s 00:05:26.034 19:06:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:26.034 19:06:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:26.034 256+0 records in 00:05:26.034 256+0 records out 00:05:26.034 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135712 s, 77.3 MB/s 00:05:26.034 19:06:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:26.034 19:06:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:26.034 256+0 records in 00:05:26.034 256+0 records out 00:05:26.034 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208885 s, 50.2 MB/s 00:05:26.034 19:06:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:26.034 19:06:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.034 19:06:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:26.034 19:06:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:26.034 19:06:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:26.034 19:06:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:26.034 19:06:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:26.034 19:06:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:26.034 19:06:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:26.034 19:06:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:26.034 19:06:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:26.034 19:06:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:26.034 19:06:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:26.034 19:06:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.034 19:06:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.034 19:06:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:26.034 19:06:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:26.034 19:06:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:26.034 19:06:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:26.293 19:06:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:26.293 19:06:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:26.293 19:06:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:26.293 19:06:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.293 19:06:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.293 19:06:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:26.293 19:06:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:26.293 19:06:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.293 19:06:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:26.293 19:06:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:26.551 19:06:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:26.551 19:06:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:26.551 19:06:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:26.551 19:06:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.551 19:06:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.551 19:06:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:26.551 19:06:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:26.551 19:06:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.551 19:06:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.551 19:06:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.551 19:06:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:26.551 19:06:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:26.551 19:06:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:26.551 19:06:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:26.810 19:06:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:26.810 19:06:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:26.810 19:06:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:26.810 19:06:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:26.810 19:06:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:26.810 19:06:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:26.810 19:06:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:26.810 19:06:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:26.811 19:06:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:26.811 19:06:12 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:26.811 19:06:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:27.069 [2024-07-24 19:06:13.200998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:27.069 [2024-07-24 19:06:13.262939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.069 [2024-07-24 19:06:13.262944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.069 [2024-07-24 19:06:13.303433] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:27.069 [2024-07-24 19:06:13.303475] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:30.359 19:06:16 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1344014 /var/tmp/spdk-nbd.sock 00:05:30.359 19:06:16 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1344014 ']' 00:05:30.359 19:06:16 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:30.359 19:06:16 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:30.359 19:06:16 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:30.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
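Teardown in each round is symmetric to setup: stop both exports, then assert that nbd_get_disks reports nothing. A sketch of the check the nbd_get_count helper performs, with the jq filter and grep taken from the log; grep -c exits non-zero on an empty list, hence the || true the xtrace shows:

    rpc="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    for nbd in /dev/nbd0 /dev/nbd1; do
        $rpc nbd_stop_disk "$nbd"
    done
    # With no exports left the RPC returns '[]', so the device count must be 0.
    count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]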
00:05:30.359 19:06:16 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:30.359 19:06:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:30.359 19:06:16 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:30.359 19:06:16 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:30.360 19:06:16 event.app_repeat -- event/event.sh@39 -- # killprocess 1344014 00:05:30.360 19:06:16 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 1344014 ']' 00:05:30.360 19:06:16 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 1344014 00:05:30.360 19:06:16 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:30.360 19:06:16 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:30.360 19:06:16 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1344014 00:05:30.360 19:06:16 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:30.360 19:06:16 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:30.360 19:06:16 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1344014' 00:05:30.360 killing process with pid 1344014 00:05:30.360 19:06:16 event.app_repeat -- common/autotest_common.sh@969 -- # kill 1344014 00:05:30.360 19:06:16 event.app_repeat -- common/autotest_common.sh@974 -- # wait 1344014 00:05:30.360 spdk_app_start is called in Round 0. 00:05:30.360 Shutdown signal received, stop current app iteration 00:05:30.360 Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 reinitialization... 00:05:30.360 spdk_app_start is called in Round 1. 00:05:30.360 Shutdown signal received, stop current app iteration 00:05:30.360 Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 reinitialization... 00:05:30.360 spdk_app_start is called in Round 2. 00:05:30.360 Shutdown signal received, stop current app iteration 00:05:30.360 Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 reinitialization... 00:05:30.360 spdk_app_start is called in Round 3. 
00:05:30.360 Shutdown signal received, stop current app iteration 00:05:30.360 19:06:16 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:30.360 19:06:16 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:30.360 00:05:30.360 real 0m16.258s 00:05:30.360 user 0m34.576s 00:05:30.360 sys 0m2.947s 00:05:30.360 19:06:16 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.360 19:06:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:30.360 ************************************ 00:05:30.360 END TEST app_repeat 00:05:30.360 ************************************ 00:05:30.360 19:06:16 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:30.360 19:06:16 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:30.360 19:06:16 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.360 19:06:16 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.360 19:06:16 event -- common/autotest_common.sh@10 -- # set +x 00:05:30.360 ************************************ 00:05:30.360 START TEST cpu_locks 00:05:30.360 ************************************ 00:05:30.360 19:06:16 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:30.360 * Looking for test storage... 00:05:30.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:30.360 19:06:16 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:30.360 19:06:16 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:30.360 19:06:16 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:30.360 19:06:16 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:30.360 19:06:16 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.360 19:06:16 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.360 19:06:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.619 ************************************ 00:05:30.619 START TEST default_locks 00:05:30.619 ************************************ 00:05:30.619 19:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:30.619 19:06:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1347506 00:05:30.619 19:06:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1347506 00:05:30.619 19:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1347506 ']' 00:05:30.619 19:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.619 19:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:30.619 19:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
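Every target started in these tests is gated on waitforlisten, which polls until the new process answers on its UNIX-domain RPC socket. A minimal stand-in for that helper — the liveness probe via rpc.py's rpc_get_methods is an assumption here, and the real autotest_common.sh version carries more bookkeeping, but the max_retries=100 budget and the pid check match the xtrace ("$rpc" as in the sketch above):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 100; i > 0; i--)); do
            kill -0 "$pid" 2> /dev/null || return 1    # target died while starting
            "$rpc" -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null && return 0
            sleep 0.5
        done
        return 1
    }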
00:05:30.619 19:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:30.619 19:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.619 19:06:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:30.619 [2024-07-24 19:06:16.677956] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:05:30.619 [2024-07-24 19:06:16.678009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1347506 ] 00:05:30.619 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.619 [2024-07-24 19:06:16.746982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.619 [2024-07-24 19:06:16.820737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.557 19:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:31.557 19:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:31.557 19:06:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1347506 00:05:31.557 19:06:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1347506 00:05:31.557 19:06:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:31.815 lslocks: write error 00:05:31.815 19:06:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1347506 00:05:31.815 19:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 1347506 ']' 00:05:31.815 19:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 1347506 00:05:31.815 19:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:31.815 19:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:31.815 19:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1347506 00:05:31.815 19:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:31.815 19:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:31.815 19:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1347506' 00:05:31.815 killing process with pid 1347506 00:05:31.815 19:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 1347506 00:05:31.815 19:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 1347506 00:05:32.074 19:06:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1347506 00:05:32.074 19:06:18 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:32.074 19:06:18 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1347506 00:05:32.074 19:06:18 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:32.074 19:06:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:32.074 19:06:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:32.074 19:06:18 
event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:32.074 19:06:18 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 1347506 00:05:32.075 19:06:18 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1347506 ']' 00:05:32.075 19:06:18 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.075 19:06:18 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.075 19:06:18 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.075 19:06:18 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.075 19:06:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.075 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1347506) - No such process 00:05:32.075 ERROR: process (pid: 1347506) is no longer running 00:05:32.075 19:06:18 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.075 19:06:18 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:32.075 19:06:18 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:32.075 19:06:18 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:32.075 19:06:18 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:32.075 19:06:18 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:32.075 19:06:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:32.075 19:06:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:32.075 19:06:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:32.075 19:06:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:32.075 00:05:32.075 real 0m1.545s 00:05:32.075 user 0m1.585s 00:05:32.075 sys 0m0.535s 00:05:32.075 19:06:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.075 19:06:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.075 ************************************ 00:05:32.075 END TEST default_locks 00:05:32.075 ************************************ 00:05:32.075 19:06:18 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:32.075 19:06:18 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.075 19:06:18 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.075 19:06:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.075 ************************************ 00:05:32.075 START TEST default_locks_via_rpc 00:05:32.075 ************************************ 00:05:32.075 19:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:32.075 19:06:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1347812 00:05:32.075 19:06:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1347812 00:05:32.075 
19:06:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:32.075 19:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1347812 ']' 00:05:32.075 19:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.075 19:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.075 19:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.075 19:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.075 19:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.075 [2024-07-24 19:06:18.309168] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:05:32.075 [2024-07-24 19:06:18.309215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1347812 ] 00:05:32.334 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.334 [2024-07-24 19:06:18.378623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.334 [2024-07-24 19:06:18.451925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.902 19:06:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.902 19:06:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:32.902 19:06:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:32.902 19:06:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.902 19:06:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.902 19:06:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.902 19:06:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:32.902 19:06:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:32.902 19:06:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:32.902 19:06:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:32.902 19:06:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:32.902 19:06:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.902 19:06:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.902 19:06:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.902 19:06:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1347812 00:05:32.902 19:06:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1347812 00:05:32.902 19:06:19 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:33.471 19:06:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1347812 00:05:33.471 19:06:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 1347812 ']' 00:05:33.471 19:06:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 1347812 00:05:33.471 19:06:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:33.471 19:06:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:33.471 19:06:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1347812 00:05:33.471 19:06:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:33.471 19:06:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:33.471 19:06:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1347812' 00:05:33.471 killing process with pid 1347812 00:05:33.471 19:06:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 1347812 00:05:33.471 19:06:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 1347812 00:05:33.730 00:05:33.730 real 0m1.613s 00:05:33.730 user 0m1.668s 00:05:33.730 sys 0m0.577s 00:05:33.730 19:06:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.730 19:06:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.730 ************************************ 00:05:33.730 END TEST default_locks_via_rpc 00:05:33.730 ************************************ 00:05:33.730 19:06:19 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:33.730 19:06:19 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.730 19:06:19 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.730 19:06:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.730 ************************************ 00:05:33.730 START TEST non_locking_app_on_locked_coremask 00:05:33.730 ************************************ 00:05:33.730 19:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:33.730 19:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1348128 00:05:33.730 19:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1348128 /var/tmp/spdk.sock 00:05:33.730 19:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1348128 ']' 00:05:33.730 19:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.730 19:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:33.730 19:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
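The locks_exist checks threaded through these tests are a single pipeline over lslocks: SPDK with cpumask locks enabled takes one /var/tmp/spdk_cpu_lock_NNN file lock per claimed core (the naming is visible in the glob expansion near the end of this log), so listing the pid's locks and grepping is enough. The stray "lslocks: write error" lines in this log are a side effect of grep -q exiting at the first match and closing the pipe under lslocks:

    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock    # any per-core lock held?
    }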
00:05:33.730 19:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:33.730 19:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.730 19:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.989 [2024-07-24 19:06:19.992046] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:05:33.989 [2024-07-24 19:06:19.992093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1348128 ] 00:05:33.989 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.989 [2024-07-24 19:06:20.063401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.989 [2024-07-24 19:06:20.141090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.557 19:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:34.557 19:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:34.557 19:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1348363 00:05:34.557 19:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1348363 /var/tmp/spdk2.sock 00:05:34.557 19:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1348363 ']' 00:05:34.557 19:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:34.557 19:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:34.557 19:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:34.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:34.557 19:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:34.557 19:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.557 19:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:34.816 [2024-07-24 19:06:20.828773] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:05:34.816 [2024-07-24 19:06:20.828829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1348363 ] 00:05:34.816 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.816 [2024-07-24 19:06:20.924882] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
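A second target with locks enabled could never share core 0 here, since the first instance already holds /var/tmp/spdk_cpu_lock_000; hence the --disable-cpumask-locks flag and the private /var/tmp/spdk2.sock socket on the launch above. The pairing, stripped to a skeleton (binary path as used in this run, waitforlisten as sketched earlier):

    spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    $spdk_tgt -m 0x1 &                  # claims /var/tmp/spdk_cpu_lock_000
    pid1=$!
    waitforlisten $pid1 /var/tmp/spdk.sock
    $spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!                             # shares core 0 but takes no lock
    waitforlisten $pid2 /var/tmp/spdk2.sock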
00:05:34.816 [2024-07-24 19:06:20.924907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.075 [2024-07-24 19:06:21.069125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.641 19:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:35.641 19:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:35.641 19:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1348128 00:05:35.641 19:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:35.641 19:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1348128 00:05:36.578 lslocks: write error 00:05:36.578 19:06:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1348128 00:05:36.578 19:06:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1348128 ']' 00:05:36.578 19:06:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1348128 00:05:36.578 19:06:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:36.578 19:06:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:36.578 19:06:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1348128 00:05:36.837 19:06:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:36.837 19:06:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:36.837 19:06:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1348128' 00:05:36.837 killing process with pid 1348128 00:05:36.837 19:06:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1348128 00:05:36.837 19:06:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1348128 00:05:37.405 19:06:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1348363 00:05:37.405 19:06:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1348363 ']' 00:05:37.405 19:06:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1348363 00:05:37.405 19:06:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:37.405 19:06:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:37.405 19:06:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1348363 00:05:37.405 19:06:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:37.405 19:06:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:37.405 19:06:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1348363' 00:05:37.405 
killing process with pid 1348363 00:05:37.405 19:06:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1348363 00:05:37.405 19:06:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1348363 00:05:37.664 00:05:37.664 real 0m3.879s 00:05:37.664 user 0m4.108s 00:05:37.664 sys 0m1.331s 00:05:37.664 19:06:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.664 19:06:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.664 ************************************ 00:05:37.664 END TEST non_locking_app_on_locked_coremask 00:05:37.664 ************************************ 00:05:37.664 19:06:23 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:37.664 19:06:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.664 19:06:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.664 19:06:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.664 ************************************ 00:05:37.664 START TEST locking_app_on_unlocked_coremask 00:05:37.664 ************************************ 00:05:37.664 19:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:37.664 19:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1348925 00:05:37.664 19:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1348925 /var/tmp/spdk.sock 00:05:37.924 19:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:37.924 19:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1348925 ']' 00:05:37.924 19:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.924 19:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:37.924 19:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.924 19:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:37.924 19:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.924 [2024-07-24 19:06:23.957660] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:05:37.924 [2024-07-24 19:06:23.957705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1348925 ] 00:05:37.924 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.924 [2024-07-24 19:06:24.026890] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
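The teardown repeated after every test (kill -0, the comm lookup, the sudo guard, SIGTERM, then wait) is autotest_common.sh's killprocess; a simplified rendering of the Linux branch the xtrace exercises:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                   # must still be running
        local name
        name=$(ps --no-headers -o comm= "$pid")      # e.g. reactor_0
        [ "$name" != sudo ] || return 1              # never SIGTERM a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                  # reap; the target is our child here
    }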
00:05:37.924 [2024-07-24 19:06:24.026919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.924 [2024-07-24 19:06:24.099530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.862 19:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:38.862 19:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:38.862 19:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1349043 00:05:38.862 19:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1349043 /var/tmp/spdk2.sock 00:05:38.862 19:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:38.862 19:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1349043 ']' 00:05:38.862 19:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:38.862 19:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:38.862 19:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:38.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:38.862 19:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:38.862 19:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.862 [2024-07-24 19:06:24.805365] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:05:38.862 [2024-07-24 19:06:24.805417] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1349043 ] 00:05:38.862 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.862 [2024-07-24 19:06:24.905770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.862 [2024-07-24 19:06:25.042513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.431 19:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.431 19:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:39.431 19:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1349043 00:05:39.431 19:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1349043 00:05:39.431 19:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:40.368 lslocks: write error 00:05:40.368 19:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1348925 00:05:40.368 19:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1348925 ']' 00:05:40.368 19:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1348925 00:05:40.368 19:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:40.368 19:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:40.368 19:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1348925 00:05:40.368 19:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:40.368 19:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:40.368 19:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1348925' 00:05:40.368 killing process with pid 1348925 00:05:40.368 19:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1348925 00:05:40.368 19:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1348925 00:05:40.936 19:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1349043 00:05:40.936 19:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1349043 ']' 00:05:40.936 19:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1349043 00:05:40.936 19:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:40.936 19:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:40.936 19:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1349043 00:05:40.937 19:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:05:40.937 19:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:40.937 19:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1349043' 00:05:40.937 killing process with pid 1349043 00:05:40.937 19:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1349043 00:05:40.937 19:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1349043 00:05:41.196 00:05:41.196 real 0m3.435s 00:05:41.196 user 0m3.671s 00:05:41.196 sys 0m1.098s 00:05:41.196 19:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.196 19:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.196 ************************************ 00:05:41.196 END TEST locking_app_on_unlocked_coremask 00:05:41.196 ************************************ 00:05:41.196 19:06:27 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:41.196 19:06:27 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.196 19:06:27 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.196 19:06:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.196 ************************************ 00:05:41.196 START TEST locking_app_on_locked_coremask 00:05:41.196 ************************************ 00:05:41.196 19:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:41.196 19:06:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1349502 00:05:41.196 19:06:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1349502 /var/tmp/spdk.sock 00:05:41.196 19:06:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:41.196 19:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1349502 ']' 00:05:41.196 19:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.196 19:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.196 19:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.196 19:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.196 19:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.455 [2024-07-24 19:06:27.470501] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
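Like default_locks earlier, the test starting here asserts an expected failure through the NOT wrapper, whose expansion shows up just below. Reduced to the steps the xtrace actually hits — the es > 128 branch for signal deaths and the pattern whitelist are both no-ops in this run (es=1):

    NOT() {
        local es=0
        "$@" || es=$?
        (( !es == 0 ))    # succeed only when the wrapped command failed
    }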
00:05:41.455 [2024-07-24 19:06:27.470547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1349502 ] 00:05:41.455 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.455 [2024-07-24 19:06:27.540249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.456 [2024-07-24 19:06:27.602749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.392 19:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:42.392 19:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:42.392 19:06:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1349763 00:05:42.392 19:06:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1349763 /var/tmp/spdk2.sock 00:05:42.392 19:06:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:42.392 19:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:42.392 19:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1349763 /var/tmp/spdk2.sock 00:05:42.392 19:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:42.392 19:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:42.392 19:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:42.392 19:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:42.392 19:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1349763 /var/tmp/spdk2.sock 00:05:42.392 19:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1349763 ']' 00:05:42.392 19:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:42.393 19:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:42.393 19:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:42.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:42.393 19:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:42.393 19:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.393 [2024-07-24 19:06:28.317236] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
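The second target launching above runs with locks enabled against core 0, which pid 1349502 already holds, so the harness expects it to die during init and wraps the wait in NOT. Compressed to a skeleton of what the xtrace shows:

    $spdk_tgt -m 0x1 &                              # primary claims core 0
    pid1=$!
    waitforlisten $pid1 /var/tmp/spdk.sock
    $spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &       # locks enabled, same core
    pid2=$!
    NOT waitforlisten $pid2 /var/tmp/spdk2.sock     # must fail: core 0 claimed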
00:05:42.393 [2024-07-24 19:06:28.317288] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1349763 ] 00:05:42.393 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.393 [2024-07-24 19:06:28.412389] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1349502 has claimed it. 00:05:42.393 [2024-07-24 19:06:28.412433] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:42.960 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1349763) - No such process 00:05:42.960 ERROR: process (pid: 1349763) is no longer running 00:05:42.960 19:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:42.960 19:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:42.960 19:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:42.960 19:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:42.960 19:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:42.960 19:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:42.960 19:06:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1349502 00:05:42.960 19:06:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1349502 00:05:42.960 19:06:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:43.561 lslocks: write error 00:05:43.561 19:06:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1349502 00:05:43.561 19:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1349502 ']' 00:05:43.561 19:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1349502 00:05:43.561 19:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:43.561 19:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:43.561 19:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1349502 00:05:43.561 19:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:43.561 19:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:43.561 19:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1349502' 00:05:43.561 killing process with pid 1349502 00:05:43.561 19:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1349502 00:05:43.561 19:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1349502 00:05:43.820 00:05:43.820 real 0m2.576s 00:05:43.820 user 0m2.829s 00:05:43.820 sys 0m0.820s 00:05:43.820 19:06:29 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.820 19:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.820 ************************************ 00:05:43.820 END TEST locking_app_on_locked_coremask 00:05:43.820 ************************************ 00:05:43.820 19:06:30 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:43.820 19:06:30 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.820 19:06:30 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.820 19:06:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.079 ************************************ 00:05:44.079 START TEST locking_overlapped_coremask 00:05:44.079 ************************************ 00:05:44.079 19:06:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:44.079 19:06:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1350065 00:05:44.079 19:06:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:44.079 19:06:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1350065 /var/tmp/spdk.sock 00:05:44.079 19:06:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1350065 ']' 00:05:44.079 19:06:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.079 19:06:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.079 19:06:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.079 19:06:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.079 19:06:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.079 [2024-07-24 19:06:30.123043] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
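The overlapped test starting here gives the primary -m 0x7 (cores 0-2) and, further down, tries a second instance with -m 0x1c (cores 2-4). The collision is plain bit arithmetic, and core 2 is exactly the core named in the claim error below:

    printf 'shared cores: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2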
00:05:44.079 [2024-07-24 19:06:30.123093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1350065 ] 00:05:44.079 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.079 [2024-07-24 19:06:30.192852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:44.079 [2024-07-24 19:06:30.264402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.079 [2024-07-24 19:06:30.264518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.079 [2024-07-24 19:06:30.264521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.016 19:06:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.016 19:06:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:45.016 19:06:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1350181 00:05:45.016 19:06:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1350181 /var/tmp/spdk2.sock 00:05:45.017 19:06:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:45.017 19:06:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:45.017 19:06:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1350181 /var/tmp/spdk2.sock 00:05:45.017 19:06:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:45.017 19:06:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:45.017 19:06:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:45.017 19:06:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:45.017 19:06:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1350181 /var/tmp/spdk2.sock 00:05:45.017 19:06:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1350181 ']' 00:05:45.017 19:06:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:45.017 19:06:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.017 19:06:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:45.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:45.017 19:06:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.017 19:06:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.017 [2024-07-24 19:06:30.971941] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:05:45.017 [2024-07-24 19:06:30.971996] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1350181 ] 00:05:45.017 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.017 [2024-07-24 19:06:31.071609] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1350065 has claimed it. 00:05:45.017 [2024-07-24 19:06:31.071655] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:45.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1350181) - No such process 00:05:45.586 ERROR: process (pid: 1350181) is no longer running 00:05:45.586 19:06:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.586 19:06:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:45.586 19:06:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:45.586 19:06:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:45.586 19:06:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:45.586 19:06:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:45.586 19:06:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:45.586 19:06:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:45.586 19:06:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:45.586 19:06:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:45.586 19:06:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1350065 00:05:45.586 19:06:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 1350065 ']' 00:05:45.586 19:06:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 1350065 00:05:45.586 19:06:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:45.586 19:06:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:45.586 19:06:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1350065 00:05:45.586 19:06:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:45.586 19:06:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:45.586 19:06:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1350065' 00:05:45.586 killing process with pid 1350065 00:05:45.586 19:06:31 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@969 -- # kill 1350065 00:05:45.586 19:06:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 1350065 00:05:45.845 00:05:45.845 real 0m1.885s 00:05:45.845 user 0m5.273s 00:05:45.845 sys 0m0.460s 00:05:45.845 19:06:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.845 19:06:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.845 ************************************ 00:05:45.845 END TEST locking_overlapped_coremask 00:05:45.845 ************************************ 00:05:45.845 19:06:31 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:45.845 19:06:31 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.845 19:06:31 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.845 19:06:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.845 ************************************ 00:05:45.845 START TEST locking_overlapped_coremask_via_rpc 00:05:45.845 ************************************ 00:05:45.845 19:06:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:45.845 19:06:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:45.845 19:06:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1350362 00:05:45.845 19:06:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1350362 /var/tmp/spdk.sock 00:05:45.845 19:06:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1350362 ']' 00:05:45.845 19:06:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.845 19:06:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.845 19:06:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.845 19:06:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.845 19:06:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.845 [2024-07-24 19:06:32.079665] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:05:45.845 [2024-07-24 19:06:32.079710] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1350362 ] 00:05:46.105 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.105 [2024-07-24 19:06:32.150952] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
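After the failed claim, the previous test verified that the surviving primary still holds exactly one lock file per core of its 0x7 mask; the check_remaining_locks expansion in the xtrace reduces to a glob-against-brace comparison:

    check_remaining_locks() {
        local locks=(/var/tmp/spdk_cpu_lock_*)
        local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }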
00:05:46.105 [2024-07-24 19:06:32.150982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:46.105 [2024-07-24 19:06:32.227081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.105 [2024-07-24 19:06:32.227107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.105 [2024-07-24 19:06:32.227109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.673 19:06:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:46.673 19:06:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:46.673 19:06:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1350626 00:05:46.673 19:06:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1350626 /var/tmp/spdk2.sock 00:05:46.673 19:06:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:46.673 19:06:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1350626 ']' 00:05:46.673 19:06:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:46.673 19:06:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.673 19:06:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:46.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:46.673 19:06:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.673 19:06:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.932 [2024-07-24 19:06:32.935113] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:05:46.932 [2024-07-24 19:06:32.935167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1350626 ] 00:05:46.932 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.932 [2024-07-24 19:06:33.033995] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:46.932 [2024-07-24 19:06:33.034028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:47.191 [2024-07-24 19:06:33.176982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:47.191 [2024-07-24 19:06:33.177079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:47.191 [2024-07-24 19:06:33.177079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:47.759 19:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:47.759 19:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:47.759 19:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:47.759 19:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.759 19:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.759 19:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.759 19:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:47.759 19:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:47.759 19:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:47.759 19:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:47.759 19:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.759 19:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:47.759 19:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.760 19:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:47.760 19:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.760 19:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.760 [2024-07-24 19:06:33.740788] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1350362 has claimed it. 
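The first target has just enabled its locks over RPC, claiming cores 0-2, so the second target's attempt fails on the shared core. A sketch replaying that failing call ($SPDK as above):

    $SPDK/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # -> JSON-RPC error -32603, "Failed to claim CPU core: 2": the lock file for
    #    core 2 is already held by pid 1350362, exactly as the ERROR line above reports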
00:05:47.760 request: 00:05:47.760 { 00:05:47.760 "method": "framework_enable_cpumask_locks", 00:05:47.760 "req_id": 1 00:05:47.760 } 00:05:47.760 Got JSON-RPC error response 00:05:47.760 response: 00:05:47.760 { 00:05:47.760 "code": -32603, 00:05:47.760 "message": "Failed to claim CPU core: 2" 00:05:47.760 } 00:05:47.760 19:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:47.760 19:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:47.760 19:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:47.760 19:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:47.760 19:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:47.760 19:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1350362 /var/tmp/spdk.sock 00:05:47.760 19:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1350362 ']' 00:05:47.760 19:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.760 19:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:47.760 19:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.760 19:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:47.760 19:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.760 19:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:47.760 19:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:47.760 19:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1350626 /var/tmp/spdk2.sock 00:05:47.760 19:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1350626 ']' 00:05:47.760 19:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.760 19:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:47.760 19:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
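The test treats that failure as success via the NOT wrapper. Reduced to its essence (the real helper in autotest_common.sh also inspects the exit status through the es bookkeeping visible above), the idiom is:

    NOT() { ! "$@"; }    # succeed only when the wrapped command fails
    NOT $SPDK/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
      && echo "lock claim was rejected, as expected"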
00:05:47.760 19:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:47.760 19:06:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.020 19:06:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:48.020 19:06:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:48.020 19:06:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:48.020 19:06:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:48.020 19:06:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:48.020 19:06:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:48.020 00:05:48.020 real 0m2.079s 00:05:48.020 user 0m0.818s 00:05:48.020 sys 0m0.190s 00:05:48.020 19:06:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.020 19:06:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.020 ************************************ 00:05:48.020 END TEST locking_overlapped_coremask_via_rpc 00:05:48.020 ************************************ 00:05:48.020 19:06:34 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:48.020 19:06:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1350362 ]] 00:05:48.020 19:06:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1350362 00:05:48.020 19:06:34 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1350362 ']' 00:05:48.020 19:06:34 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1350362 00:05:48.020 19:06:34 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:48.020 19:06:34 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:48.020 19:06:34 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1350362 00:05:48.020 19:06:34 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:48.020 19:06:34 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:48.020 19:06:34 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1350362' 00:05:48.020 killing process with pid 1350362 00:05:48.020 19:06:34 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1350362 00:05:48.020 19:06:34 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1350362 00:05:48.587 19:06:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1350626 ]] 00:05:48.587 19:06:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1350626 00:05:48.587 19:06:34 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1350626 ']' 00:05:48.587 19:06:34 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1350626 00:05:48.587 19:06:34 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:48.587 19:06:34 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:05:48.587 19:06:34 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1350626 00:05:48.587 19:06:34 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:48.587 19:06:34 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:48.587 19:06:34 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1350626' 00:05:48.587 killing process with pid 1350626 00:05:48.587 19:06:34 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1350626 00:05:48.587 19:06:34 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1350626 00:05:48.847 19:06:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:48.847 19:06:34 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:48.847 19:06:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1350362 ]] 00:05:48.847 19:06:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1350362 00:05:48.847 19:06:34 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1350362 ']' 00:05:48.847 19:06:34 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1350362 00:05:48.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1350362) - No such process 00:05:48.847 19:06:34 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1350362 is not found' 00:05:48.847 Process with pid 1350362 is not found 00:05:48.847 19:06:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1350626 ]] 00:05:48.847 19:06:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1350626 00:05:48.847 19:06:34 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1350626 ']' 00:05:48.847 19:06:34 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1350626 00:05:48.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1350626) - No such process 00:05:48.847 19:06:34 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1350626 is not found' 00:05:48.847 Process with pid 1350626 is not found 00:05:48.847 19:06:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:48.847 00:05:48.847 real 0m18.427s 00:05:48.847 user 0m30.490s 00:05:48.847 sys 0m6.037s 00:05:48.847 19:06:34 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.847 19:06:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.847 ************************************ 00:05:48.847 END TEST cpu_locks 00:05:48.847 ************************************ 00:05:48.847 00:05:48.847 real 0m43.684s 00:05:48.847 user 1m20.843s 00:05:48.847 sys 0m10.089s 00:05:48.847 19:06:34 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.847 19:06:34 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.847 ************************************ 00:05:48.847 END TEST event 00:05:48.847 ************************************ 00:05:48.847 19:06:34 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:48.847 19:06:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.847 19:06:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.847 19:06:34 -- common/autotest_common.sh@10 -- # set +x 00:05:48.847 ************************************ 00:05:48.847 START TEST thread 00:05:48.847 ************************************ 00:05:48.847 19:06:35 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:49.107 * Looking for test storage... 00:05:49.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:49.107 19:06:35 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:49.107 19:06:35 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:49.107 19:06:35 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.107 19:06:35 thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.107 ************************************ 00:05:49.107 START TEST thread_poller_perf 00:05:49.107 ************************************ 00:05:49.107 19:06:35 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:49.107 [2024-07-24 19:06:35.204670] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:05:49.107 [2024-07-24 19:06:35.204885] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1351004 ] 00:05:49.107 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.107 [2024-07-24 19:06:35.275723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.107 [2024-07-24 19:06:35.345412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.107 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:50.486 ====================================== 00:05:50.486 busy:2506778350 (cyc) 00:05:50.486 total_run_count: 428000 00:05:50.486 tsc_hz: 2500000000 (cyc) 00:05:50.486 ====================================== 00:05:50.486 poller_cost: 5856 (cyc), 2342 (nsec) 00:05:50.486 00:05:50.486 real 0m1.234s 00:05:50.486 user 0m1.145s 00:05:50.486 sys 0m0.085s 00:05:50.486 19:06:36 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.486 19:06:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:50.486 ************************************ 00:05:50.486 END TEST thread_poller_perf 00:05:50.486 ************************************ 00:05:50.486 19:06:36 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:50.486 19:06:36 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:50.486 19:06:36 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.486 19:06:36 thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.486 ************************************ 00:05:50.486 START TEST thread_poller_perf 00:05:50.486 ************************************ 00:05:50.486 19:06:36 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:50.486 [2024-07-24 19:06:36.520125] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
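The poller_cost figures printed above follow directly from the reported counters. A sketch of the arithmetic for this 1-microsecond-period run:

    busy=2506778350; runs=428000; tsc_hz=2500000000
    echo $(( busy / runs ))                        # 5856 cycles per poller invocation
    echo $(( busy / runs * 1000000000 / tsc_hz ))  # 2342 nsec at the reported 2.5 GHz TSC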
00:05:50.486 [2024-07-24 19:06:36.520221] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1351285 ] 00:05:50.486 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.486 [2024-07-24 19:06:36.591525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.486 [2024-07-24 19:06:36.658552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.486 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:51.865 ====================================== 00:05:51.865 busy:2501583894 (cyc) 00:05:51.865 total_run_count: 5652000 00:05:51.865 tsc_hz: 2500000000 (cyc) 00:05:51.865 ====================================== 00:05:51.865 poller_cost: 442 (cyc), 176 (nsec) 00:05:51.865 00:05:51.865 real 0m1.227s 00:05:51.865 user 0m1.125s 00:05:51.865 sys 0m0.097s 00:05:51.865 19:06:37 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.865 19:06:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:51.865 ************************************ 00:05:51.865 END TEST thread_poller_perf 00:05:51.865 ************************************ 00:05:51.865 19:06:37 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:51.865 00:05:51.865 real 0m2.728s 00:05:51.865 user 0m2.380s 00:05:51.865 sys 0m0.361s 00:05:51.865 19:06:37 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.865 19:06:37 thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.865 ************************************ 00:05:51.865 END TEST thread 00:05:51.865 ************************************ 00:05:51.865 19:06:37 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:05:51.865 19:06:37 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:51.865 19:06:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.865 19:06:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.865 19:06:37 -- common/autotest_common.sh@10 -- # set +x 00:05:51.865 ************************************ 00:05:51.865 START TEST app_cmdline 00:05:51.865 ************************************ 00:05:51.865 19:06:37 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:51.865 * Looking for test storage... 00:05:51.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:51.865 19:06:37 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:51.865 19:06:37 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1351602 00:05:51.865 19:06:37 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1351602 00:05:51.865 19:06:37 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:51.865 19:06:37 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 1351602 ']' 00:05:51.865 19:06:37 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.865 19:06:37 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.865 19:06:37 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:51.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.865 19:06:37 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.865 19:06:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:51.865 [2024-07-24 19:06:38.006992] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:05:51.865 [2024-07-24 19:06:38.007039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1351602 ] 00:05:51.865 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.865 [2024-07-24 19:06:38.076846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.124 [2024-07-24 19:06:38.150700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.693 19:06:38 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.693 19:06:38 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:05:52.693 19:06:38 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:52.952 { 00:05:52.952 "version": "SPDK v24.09-pre git sha1 dca21ec0f", 00:05:52.952 "fields": { 00:05:52.952 "major": 24, 00:05:52.952 "minor": 9, 00:05:52.952 "patch": 0, 00:05:52.952 "suffix": "-pre", 00:05:52.952 "commit": "dca21ec0f" 00:05:52.952 } 00:05:52.952 } 00:05:52.952 19:06:38 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:52.952 19:06:38 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:52.952 19:06:38 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:52.953 19:06:38 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:52.953 19:06:38 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:52.953 19:06:38 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.953 19:06:38 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:52.953 19:06:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:52.953 19:06:38 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:52.953 19:06:38 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:52.953 19:06:39 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:52.953 19:06:39 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:52.953 19:06:39 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:52.953 19:06:39 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:52.953 19:06:39 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:52.953 19:06:39 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:52.953 19:06:39 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.953 19:06:39 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:52.953 19:06:39 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
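Because the target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, only those two methods are callable; anything else is rejected, as the env_dpdk_get_mem_stats call below demonstrates. A sketch of both sides of the allow-list ($SPDK as above):

    $SPDK/scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort   # exactly: rpc_get_methods, spdk_get_version
    $SPDK/scripts/rpc.py env_dpdk_get_mem_stats                 # -> JSON-RPC -32601 "Method not found"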
00:05:52.953 19:06:39 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:52.953 19:06:39 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.953 19:06:39 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:52.953 19:06:39 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:52.953 19:06:39 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:52.953 request: 00:05:52.953 { 00:05:52.953 "method": "env_dpdk_get_mem_stats", 00:05:52.953 "req_id": 1 00:05:52.953 } 00:05:52.953 Got JSON-RPC error response 00:05:52.953 response: 00:05:52.953 { 00:05:52.953 "code": -32601, 00:05:52.953 "message": "Method not found" 00:05:52.953 } 00:05:52.953 19:06:39 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:52.953 19:06:39 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:52.953 19:06:39 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:52.953 19:06:39 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:52.953 19:06:39 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1351602 00:05:52.953 19:06:39 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 1351602 ']' 00:05:52.953 19:06:39 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 1351602 00:05:52.953 19:06:39 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:05:52.953 19:06:39 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:52.953 19:06:39 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1351602 00:05:53.212 19:06:39 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:53.212 19:06:39 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:53.212 19:06:39 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1351602' 00:05:53.212 killing process with pid 1351602 00:05:53.212 19:06:39 app_cmdline -- common/autotest_common.sh@969 -- # kill 1351602 00:05:53.212 19:06:39 app_cmdline -- common/autotest_common.sh@974 -- # wait 1351602 00:05:53.472 00:05:53.472 real 0m1.694s 00:05:53.472 user 0m1.955s 00:05:53.472 sys 0m0.500s 00:05:53.472 19:06:39 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.472 19:06:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:53.472 ************************************ 00:05:53.472 END TEST app_cmdline 00:05:53.472 ************************************ 00:05:53.472 19:06:39 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:53.472 19:06:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.472 19:06:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.472 19:06:39 -- common/autotest_common.sh@10 -- # set +x 00:05:53.472 ************************************ 00:05:53.472 START TEST version 00:05:53.472 ************************************ 00:05:53.472 19:06:39 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:53.731 * Looking for test storage... 
00:05:53.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:53.731 19:06:39 version -- app/version.sh@17 -- # get_header_version major 00:05:53.731 19:06:39 version -- app/version.sh@14 -- # cut -f2 00:05:53.731 19:06:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:53.731 19:06:39 version -- app/version.sh@14 -- # tr -d '"' 00:05:53.731 19:06:39 version -- app/version.sh@17 -- # major=24 00:05:53.731 19:06:39 version -- app/version.sh@18 -- # get_header_version minor 00:05:53.731 19:06:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:53.731 19:06:39 version -- app/version.sh@14 -- # cut -f2 00:05:53.731 19:06:39 version -- app/version.sh@14 -- # tr -d '"' 00:05:53.731 19:06:39 version -- app/version.sh@18 -- # minor=9 00:05:53.731 19:06:39 version -- app/version.sh@19 -- # get_header_version patch 00:05:53.731 19:06:39 version -- app/version.sh@14 -- # cut -f2 00:05:53.731 19:06:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:53.731 19:06:39 version -- app/version.sh@14 -- # tr -d '"' 00:05:53.731 19:06:39 version -- app/version.sh@19 -- # patch=0 00:05:53.731 19:06:39 version -- app/version.sh@20 -- # get_header_version suffix 00:05:53.731 19:06:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:53.731 19:06:39 version -- app/version.sh@14 -- # cut -f2 00:05:53.731 19:06:39 version -- app/version.sh@14 -- # tr -d '"' 00:05:53.731 19:06:39 version -- app/version.sh@20 -- # suffix=-pre 00:05:53.731 19:06:39 version -- app/version.sh@22 -- # version=24.9 00:05:53.731 19:06:39 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:53.731 19:06:39 version -- app/version.sh@28 -- # version=24.9rc0 00:05:53.731 19:06:39 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:53.731 19:06:39 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:53.731 19:06:39 version -- app/version.sh@30 -- # py_version=24.9rc0 00:05:53.731 19:06:39 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:53.731 00:05:53.731 real 0m0.183s 00:05:53.732 user 0m0.089s 00:05:53.732 sys 0m0.132s 00:05:53.732 19:06:39 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.732 19:06:39 version -- common/autotest_common.sh@10 -- # set +x 00:05:53.732 ************************************ 00:05:53.732 END TEST version 00:05:53.732 ************************************ 00:05:53.732 19:06:39 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:05:53.732 19:06:39 -- spdk/autotest.sh@202 -- # uname -s 00:05:53.732 19:06:39 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:05:53.732 19:06:39 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:05:53.732 19:06:39 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:05:53.732 19:06:39 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 
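get_header_version above is plain text extraction from include/spdk/version.h: grep the #define, take the tab-separated value, strip the quotes. A sketch under the assumption that the macro suffix is passed uppercased (the grep/cut/tr pipeline matches the logged one):

    get_header_version() {
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h \
        | cut -f2 | tr -d '"'
    }
    ver="$(get_header_version MAJOR).$(get_header_version MINOR)"   # -> 24.9; suffix -pre maps to rc0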
00:05:53.732 19:06:39 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:05:53.732 19:06:39 -- spdk/autotest.sh@264 -- # timing_exit lib 00:05:53.732 19:06:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:53.732 19:06:39 -- common/autotest_common.sh@10 -- # set +x 00:05:53.732 19:06:39 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:05:53.732 19:06:39 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:05:53.732 19:06:39 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:05:53.732 19:06:39 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:05:53.732 19:06:39 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:05:53.732 19:06:39 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:05:53.732 19:06:39 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:53.732 19:06:39 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:53.732 19:06:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.732 19:06:39 -- common/autotest_common.sh@10 -- # set +x 00:05:53.732 ************************************ 00:05:53.732 START TEST nvmf_tcp 00:05:53.732 ************************************ 00:05:53.732 19:06:39 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:53.991 * Looking for test storage... 00:05:53.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:53.991 19:06:40 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:53.991 19:06:40 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:53.991 19:06:40 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:53.991 19:06:40 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:53.991 19:06:40 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.991 19:06:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:53.991 ************************************ 00:05:53.991 START TEST nvmf_target_core 00:05:53.991 ************************************ 00:05:53.991 19:06:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:53.991 * Looking for test storage... 00:05:53.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:53.991 19:06:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:53.991 19:06:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:53.991 19:06:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:53.991 19:06:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:53.991 19:06:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:53.991 19:06:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:53.991 19:06:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:53.991 19:06:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:53.991 19:06:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:53.991 19:06:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:53.991 19:06:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:53.991 19:06:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:53.991 19:06:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:53.991 19:06:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:53.991 19:06:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:05:53.991 19:06:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:05:53.991 19:06:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:53.991 19:06:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:53.991 19:06:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:53.991 19:06:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:53.991 19:06:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:53.991 19:06:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:53.991 19:06:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:53.991 19:06:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:53.991 19:06:40 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.991 19:06:40 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.991 19:06:40 nvmf_tcp.nvmf_target_core -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.992 19:06:40 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:53.992 19:06:40 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.992 19:06:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:05:53.992 19:06:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:53.992 19:06:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:53.992 19:06:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:53.992 19:06:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:53.992 19:06:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:53.992 19:06:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:53.992 19:06:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:53.992 19:06:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:53.992 19:06:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:53.992 19:06:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:53.992 19:06:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:53.992 19:06:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:53.992 19:06:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:53.992 19:06:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.992 19:06:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:54.252 ************************************ 00:05:54.252 START TEST nvmf_abort 00:05:54.252 ************************************ 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:54.252 * Looking for test storage... 
00:05:54.252 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
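Sourcing nvmf/common.sh pins the identifiers every nvmf test reuses. A sketch with the values from the log; the NVME_HOSTID extraction shown is illustrative, not necessarily how common.sh derives it:

    NVMF_PORT=4420
    NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # illustrative: peel the uuid off the hostnqn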
00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:05:54.252 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:00.823 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:00.823 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:00.823 19:06:46 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:00.823 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:00.824 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:00.824 Found net devices under 0000:af:00.0: cvl_0_0 00:06:00.824 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:00.824 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:00.824 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:00.824 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:00.824 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:00.824 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:00.824 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:00.824 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:00.824 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:00.824 Found net devices under 0000:af:00.1: cvl_0_1 00:06:00.824 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:00.824 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:00.824 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:06:00.824 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:00.824 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:00.824 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:00.824 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:00.824 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:00.824 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:00.824 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:00.824 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:00.824 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:00.824 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:00.824 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:00.824 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:00.824 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:00.824 
19:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:00.824 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:00.824 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:01.083 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:01.083 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:01.083 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:01.083 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:01.083 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:01.083 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:01.083 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:01.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:01.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:06:01.083 00:06:01.083 --- 10.0.0.2 ping statistics --- 00:06:01.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:01.083 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:06:01.083 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:01.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:01.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:06:01.083 00:06:01.083 --- 10.0.0.1 ping statistics --- 00:06:01.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:01.083 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:06:01.083 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:01.083 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:06:01.083 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:01.083 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:01.083 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:01.083 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:01.083 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:01.083 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:01.083 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:01.342 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:01.342 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:01.342 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:01.342 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:01.342 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # 
nvmfpid=1355410 00:06:01.342 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:01.342 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1355410 00:06:01.342 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1355410 ']' 00:06:01.342 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.342 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.342 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.342 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.342 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:01.342 [2024-07-24 19:06:47.400188] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:06:01.342 [2024-07-24 19:06:47.400237] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:01.342 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.342 [2024-07-24 19:06:47.476000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:01.342 [2024-07-24 19:06:47.551282] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:01.342 [2024-07-24 19:06:47.551318] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:01.342 [2024-07-24 19:06:47.551327] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:01.342 [2024-07-24 19:06:47.551335] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:01.342 [2024-07-24 19:06:47.551342] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
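For context, the waitforlisten step traced here simply polls the freshly launched nvmf_tgt until it answers on /var/tmp/spdk.sock or the retry budget runs out. A minimal sketch of that loop, assuming rpc.py from the spdk checkout ($SPDK_DIR, the helper name and the 0.1s poll interval are illustrative; max_retries=100 and the socket path are taken from the trace):

    wait_for_rpc() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do                  # max_retries=100, as traced above
            kill -0 "$pid" 2>/dev/null || return 1       # target exited before listening
            if "$SPDK_DIR/scripts/rpc.py" -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null; then
                return 0                                 # socket is up and answering RPCs
            fi
            sleep 0.1
        done
        return 1                                         # never came up within the budget
    }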
00:06:01.342 [2024-07-24 19:06:47.551451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.342 [2024-07-24 19:06:47.551536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:01.342 [2024-07-24 19:06:47.551538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.315 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.315 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:06:02.315 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:02.315 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:02.315 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:02.315 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:02.315 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:02.315 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.315 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:02.315 [2024-07-24 19:06:48.251273] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:02.315 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.315 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:02.315 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.315 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:02.315 Malloc0 00:06:02.315 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.315 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:02.315 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.315 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:02.315 Delay0 00:06:02.315 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.315 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:02.315 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.315 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:02.315 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.315 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:02.315 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.315 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:02.315 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:06:02.315 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:02.315 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.315 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:02.315 [2024-07-24 19:06:48.335593] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:02.315 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.315 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:02.315 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.315 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:02.315 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.315 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:02.315 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.315 [2024-07-24 19:06:48.443876] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:04.854 Initializing NVMe Controllers 00:06:04.854 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:04.854 controller IO queue size 128 less than required 00:06:04.854 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:04.854 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:04.855 Initialization complete. Launching workers. 
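Condensed from the rpc_cmd trace above, the target this abort example is attacking was assembled with the following RPC sequence ($rpc is shorthand for the rpc.py path used throughout this job; in the test the same calls go through the rpc_cmd wrapper):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The Delay0 bdev (1000000 us, i.e. roughly one second, of artificial latency stacked on Malloc0) is what keeps enough I/O in flight for the example's -q 128 queue to have something to abort.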
00:06:04.855 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 41770 00:06:04.855 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 41831, failed to submit 62 00:06:04.855 success 41774, unsuccess 57, failed 0 00:06:04.855 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:04.855 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.855 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:04.855 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.855 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:04.855 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:04.855 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:04.855 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:06:04.855 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:04.855 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:06:04.855 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:04.855 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:04.855 rmmod nvme_tcp 00:06:04.855 rmmod nvme_fabrics 00:06:04.855 rmmod nvme_keyring 00:06:04.855 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:04.855 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:06:04.855 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:06:04.855 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1355410 ']' 00:06:04.855 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1355410 00:06:04.855 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1355410 ']' 00:06:04.855 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1355410 00:06:04.855 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:06:04.855 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:04.855 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1355410 00:06:04.855 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:04.855 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:04.855 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1355410' 00:06:04.855 killing process with pid 1355410 00:06:04.855 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1355410 00:06:04.855 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1355410 00:06:04.855 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:04.855 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:04.855 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:04.855 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:04.855 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:04.855 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:04.855 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:04.855 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:06.761 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:06.761 00:06:06.761 real 0m12.687s 00:06:06.761 user 0m13.315s 00:06:06.761 sys 0m6.502s 00:06:06.761 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.761 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:06.761 ************************************ 00:06:06.761 END TEST nvmf_abort 00:06:06.761 ************************************ 00:06:06.761 19:06:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:06.761 19:06:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:06.761 19:06:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.761 19:06:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:07.021 ************************************ 00:06:07.021 START TEST nvmf_ns_hotplug_stress 00:06:07.021 ************************************ 00:06:07.021 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:07.021 * Looking for test storage... 
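For reference, the nvmftestfini teardown traced just above boils down to the steps below (pid 1355410 is the nvmf_tgt started earlier; the final ip netns delete is an assumption about what the _remove_spdk_ns helper does, since its body runs with xtrace disabled):

    sync
    modprobe -v -r nvme-tcp          # rmmod'ed nvme_tcp, nvme_fabrics and nvme_keyring above
    modprobe -v -r nvme-fabrics
    kill 1355410 && wait 1355410     # stop the target and reap it
    ip netns delete cvl_0_0_ns_spdk  # assumed: tear down the target-side namespace
    ip -4 addr flush cvl_0_1         # drop the initiator-side address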
00:06:07.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:07.021 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:07.021 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:07.021 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:07.021 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:07.021 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:07.021 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:07.021 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:07.021 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:07.021 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:07.021 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:07.021 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:07.021 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:07.021 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:06:07.021 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:06:07.021 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:07.021 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:07.021 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:07.021 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:07.021 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:07.021 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:07.021 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:07.021 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:07.021 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.022 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.022 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.022 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:07.022 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.022 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:06:07.022 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:07.022 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:07.022 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:07.022 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:07.022 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:07.022 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:07.022 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:07.022 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:07.022 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:07.022 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:07.022 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:07.022 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:07.022 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:07.022 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:07.022 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:07.022 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:07.022 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:07.022 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:07.022 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:07.022 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:07.022 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:06:07.022 19:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:13.594 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
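The device discovery traced next keys supported NICs by PCI vendor:device ID and then reads the matching netdev names out of sysfs. A self-contained sketch of the e810 branch this job takes (the pci_bus_cache contents are illustrative, filled in with the two addresses this node reports below; the link-state and driver checks from common.sh are omitted):

    declare -A pci_bus_cache=(
        ["0x8086:0x159b"]="0000:af:00.0 0000:af:00.1"      # illustrative: the two E810 ports below
    )
    intel=0x8086
    e810=(${pci_bus_cache["$intel:0x159b"]})
    pci_devs=("${e810[@]}")                                # SPDK_TEST_NVMF_NICS=e810 keeps only these
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdev dirs for this PCI function
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip sysfs path: cvl_0_0 / cvl_0_1 here
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done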
00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:13.595 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:13.595 19:06:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:13.595 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:13.595 Found net devices under 0000:af:00.0: cvl_0_0 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:13.595 Found net devices under 0000:af:00.1: cvl_0_1 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:13.595 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:13.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:13.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:06:13.595 00:06:13.595 --- 10.0.0.2 ping statistics --- 00:06:13.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:13.595 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:06:13.596 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:13.596 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:13.596 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:06:13.596 00:06:13.596 --- 10.0.0.1 ping statistics --- 00:06:13.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:13.596 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:06:13.596 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:13.596 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:06:13.596 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:13.596 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:13.596 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:13.596 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:13.596 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:13.596 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:13.596 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:13.856 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:13.856 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:13.856 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:13.856 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:13.856 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1359658 00:06:13.856 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:13.856 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1359658 00:06:13.856 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 1359658 ']' 00:06:13.856 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.856 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:13.856 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
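A note on the -m 0xE core mask handed to nvmf_tgt in the trace above: bit n selects core n, so 0xE (binary 1110) pins reactors to cores 1, 2 and 3 while leaving core 0 free for the initiator-side perf process (which runs with -c 0x1). Quick shell check:

    mask=0xE
    for ((n = 0; n < 8; n++)); do
        (( (mask >> n) & 1 )) && echo "reactor expected on core $n"
    done
    # prints cores 1, 2 and 3 -- matching the three reactor_run notices that follow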
00:06:13.856 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:13.856 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:13.856 [2024-07-24 19:06:59.920128] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:06:13.856 [2024-07-24 19:06:59.920172] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:13.856 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.856 [2024-07-24 19:06:59.994393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:13.856 [2024-07-24 19:07:00.092914] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:13.856 [2024-07-24 19:07:00.092967] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:13.856 [2024-07-24 19:07:00.092983] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:13.856 [2024-07-24 19:07:00.092997] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:13.856 [2024-07-24 19:07:00.093008] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:13.856 [2024-07-24 19:07:00.093129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.856 [2024-07-24 19:07:00.093213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:13.856 [2024-07-24 19:07:00.093219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.795 19:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.795 19:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:06:14.795 19:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:14.795 19:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:14.795 19:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:14.795 19:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:14.795 19:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:14.795 19:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:14.795 [2024-07-24 19:07:00.922301] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:14.795 19:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:15.054 19:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:15.313 
[2024-07-24 19:07:01.300000] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:15.313 19:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:15.313 19:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:15.572 Malloc0 00:06:15.572 19:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:15.831 Delay0 00:06:15.831 19:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.090 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:16.090 NULL1 00:06:16.090 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:16.349 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:16.349 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1360209 00:06:16.349 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:16.349 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.349 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.608 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.608 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:16.608 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:16.867 true 00:06:16.867 19:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:16.867 19:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.126 19:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:06:17.384 19:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:17.384 19:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:17.384 true 00:06:17.384 19:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:17.384 19:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.643 19:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.903 19:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:17.903 19:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:17.903 true 00:06:18.161 19:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:18.161 19:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.161 19:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.420 19:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:18.420 19:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:18.679 true 00:06:18.679 19:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:18.679 19:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.679 19:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.938 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:18.938 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:19.197 true 00:06:19.197 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:19.197 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.456 19:07:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.456 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:19.456 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:19.714 true 00:06:19.714 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:19.714 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.974 19:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.974 19:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:19.974 19:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:20.232 true 00:06:20.232 19:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:20.232 19:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.491 19:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.749 19:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:20.749 19:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:20.749 true 00:06:20.749 19:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:20.749 19:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.008 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.267 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:21.267 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:21.525 true 00:06:21.525 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:21.525 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.525 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.844 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:21.844 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:22.102 true 00:06:22.102 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:22.102 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.102 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.361 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:22.361 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:22.620 true 00:06:22.620 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:22.620 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.879 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.879 19:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:22.879 19:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:23.138 true 00:06:23.138 19:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:23.138 19:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.397 19:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.397 19:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:23.397 19:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:23.656 true 00:06:23.656 19:07:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:23.656 19:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.914 19:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.172 19:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:24.172 19:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:24.172 true 00:06:24.172 19:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:24.172 19:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.431 19:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.689 19:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:24.689 19:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:24.689 true 00:06:24.947 19:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:24.947 19:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.947 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.205 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:25.205 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:25.464 true 00:06:25.464 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:25.464 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.464 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.723 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:25.723 19:07:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:25.982 true 00:06:25.982 19:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:25.982 19:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.242 19:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.242 19:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:26.242 19:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:26.500 true 00:06:26.500 19:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:26.500 19:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.758 19:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.758 19:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:26.758 19:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:27.017 true 00:06:27.017 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:27.017 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.275 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.533 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:27.533 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:27.533 true 00:06:27.533 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:27.533 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.792 19:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.051 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:28.051 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:28.309 true 00:06:28.309 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:28.309 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.309 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.568 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:28.568 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:28.826 true 00:06:28.826 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:28.826 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.084 19:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.084 19:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:29.084 19:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:29.342 true 00:06:29.342 19:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:29.342 19:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.600 19:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.600 19:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:29.600 19:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:29.858 true 00:06:29.858 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:29.858 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.117 19:07:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.376 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:30.376 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:30.376 true 00:06:30.376 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:30.376 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.634 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.891 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:30.891 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:30.891 true 00:06:31.149 19:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:31.149 19:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.149 19:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.406 19:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:31.406 19:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:31.665 true 00:06:31.665 19:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:31.665 19:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.665 19:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.924 19:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:31.924 19:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:32.182 true 00:06:32.182 19:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:32.182 19:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.440 19:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.440 19:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:32.440 19:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:32.699 true 00:06:32.699 19:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:32.699 19:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.958 19:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.958 19:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:32.958 19:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:33.216 true 00:06:33.216 19:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:33.216 19:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.475 19:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.734 19:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:06:33.734 19:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:33.734 true 00:06:33.734 19:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:33.734 19:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.993 19:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.251 19:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:06:34.251 19:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:06:34.251 true 00:06:34.510 19:07:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:34.510 19:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.510 19:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.775 19:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:06:34.775 19:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:06:35.081 true 00:06:35.081 19:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:35.081 19:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.081 19:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.360 19:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:06:35.360 19:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:06:35.619 true 00:06:35.619 19:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:35.619 19:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.619 19:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.877 19:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:06:35.877 19:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:06:36.135 true 00:06:36.135 19:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:36.135 19:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.395 19:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.395 19:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:06:36.395 19:07:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:06:36.654 true 00:06:36.654 19:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:36.654 19:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.913 19:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.172 19:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:06:37.172 19:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:06:37.172 true 00:06:37.172 19:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:37.172 19:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.430 19:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.688 19:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:06:37.688 19:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:06:37.688 true 00:06:37.688 19:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:37.688 19:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.947 19:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.206 19:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:06:38.206 19:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:06:38.466 true 00:06:38.466 19:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:38.466 19:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.466 19:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.725 19:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:06:38.725 19:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:06:38.984 true 00:06:38.984 19:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:38.984 19:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.243 19:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.243 19:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:06:39.243 19:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:06:39.502 true 00:06:39.502 19:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:39.502 19:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.761 19:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.761 19:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:06:39.762 19:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:06:40.021 true 00:06:40.021 19:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:40.021 19:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.279 19:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.538 19:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:06:40.538 19:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:06:40.538 true 00:06:40.538 19:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:40.538 19:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.797 19:07:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.056 19:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:06:41.056 19:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:06:41.056 true 00:06:41.315 19:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:41.316 19:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.316 19:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.575 19:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:06:41.575 19:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:06:41.833 true 00:06:41.833 19:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:41.833 19:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.833 19:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.092 19:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:06:42.092 19:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:06:42.351 true 00:06:42.351 19:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:42.351 19:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.610 19:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.610 19:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:06:42.610 19:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:06:42.869 true 00:06:42.869 19:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:42.869 19:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.128 19:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.128 19:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:06:43.128 19:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:06:43.387 true 00:06:43.387 19:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:43.387 19:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.646 19:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.905 19:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:06:43.905 19:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:06:43.905 true 00:06:43.905 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:43.905 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.163 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.422 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:06:44.422 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:06:44.681 true 00:06:44.681 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209 00:06:44.681 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.681 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.940 19:07:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:06:44.941 19:07:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:06:45.199 true 00:06:45.199 19:07:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209
00:06:45.199 19:07:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:45.459 19:07:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:45.459 19:07:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052
00:06:45.459 19:07:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052
00:06:45.718 true
00:06:45.718 19:07:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209
00:06:45.718 19:07:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:45.977 19:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:46.237 19:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:06:46.237 19:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:06:46.237 true
00:06:46.237 19:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209
00:06:46.237 19:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:46.495 19:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:46.495 Initializing NVMe Controllers
00:06:46.495 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:46.495 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1
00:06:46.495 Controller IO queue size 128, less than required.
00:06:46.495 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:46.495 WARNING: Some requested NVMe devices were skipped
00:06:46.495 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:46.495 Initialization complete. Launching workers.
00:06:46.495 ========================================================
00:06:46.495 Latency(us)
00:06:46.495 Device Information : IOPS MiB/s Average min max
00:06:46.495 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 28495.43 13.91 4491.96 1894.92 7946.22
00:06:46.495 ========================================================
00:06:46.495 Total : 28495.43 13.91 4491.96 1894.92 7946.22
00:06:46.755 19:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:06:46.755 19:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:06:47.014 true
00:06:47.014 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360209
00:06:47.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1360209) - No such process
00:06:47.014 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1360209
00:06:47.014 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:47.014 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:47.283 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:47.283 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:06:47.283 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:06:47.283 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:47.283 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:06:47.542 null0
00:06:47.542 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:47.542 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:47.542 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:06:47.542 null1
00:06:47.542 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:47.542 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:47.542 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:06:47.802 null2
00:06:47.802 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:47.802 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:47.802
19:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:48.061 null3 00:06:48.061 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:48.061 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:48.061 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:48.061 null4 00:06:48.061 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:48.061 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:48.061 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:48.320 null5 00:06:48.320 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:48.320 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:48.320 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:48.581 null6 00:06:48.581 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:48.581 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:48.582 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:48.582 null7 00:06:48.582 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:48.582 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:48.582 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
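The xtrace repeated above (script lines @44-@50) is the hotplug loop itself: while the I/O generator (PID 1360209 in this run) stays alive, namespace 1 (Delay0) is hot-removed and re-added, and NULL1 (apparently the bdev serving NSID 2, per the perf output above) is grown by one per pass, null_size 1006 through 1054 here. A minimal bash sketch of that loop, reconstructed from the trace alone; the rpc helper variable and perf_pid name are stand-ins, not the script's actual identifiers:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    while kill -0 "$perf_pid"; do                 # @44: keep going while perf runs
        $rpc nvmf_subsystem_remove_ns "$nqn" 1    # @45: hot-remove NS 1 (Delay0)
        $rpc nvmf_subsystem_add_ns "$nqn" Delay0  # @46: hot-add it back
        null_size=$((null_size + 1))              # @49: next size for the data namespace
        $rpc bdev_null_resize NULL1 "$null_size"  # @50: the rpc prints "true" on success
    done
    wait "$perf_pid"                              # @53: reap perf once kill -0 fails

Once the generator exits, kill -0 fails with the "No such process" error seen above, the loop ends, and the script reaps the process and removes both namespaces (@53-@55).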
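As a sanity check on the perf summary above: 28495.43 IOPS at 13.91 MiB/s is consistent with roughly 512-byte IOs (28495.43 × 512 B ≈ 13.91 MiB/s), and 4491.96 us is the average per-IO latency with a 1894.92-7946.22 us min-max spread. The "Skipping inactive NS 1" and IO-queue-size warnings are expected in this test: namespace 1 is the one being hot-removed, and the log itself notes that requests beyond the controller's 128-entry IO queue simply back up in the NVMe driver.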
00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
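The @14-@18 entries now interleaving with the spawn loop come from concurrent instances of the script's add_remove helper, each pinned to one namespace ID and one null bdev. A sketch reconstructed from the xtrace; the for-loop form is an inference, only the (( )) conditions and the two rpc calls are attested:

    add_remove() {
        local nsid=$1 bdev=$2                                                        # @14
        for ((i = 0; i < 10; i++)); do                                               # @16: ten add/remove rounds
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev" # @17: attach with explicit NSID
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"         # @18: detach again
        done
    }

Because eight of these run in parallel against the same subsystem, their add and remove calls interleave freely in the log, which is exactly the hotplug race this test exercises.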
00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
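Around @58-@66 the script fans out: each of the eight null bdevs created above gets a background add_remove worker, and the parent blocks on the whole batch, which is the wait on eight worker PIDs traced at @66 just below. A reconstruction of that orchestration, with the same caveat that it is inferred from the trace rather than copied from ns_hotplug_stress.sh:

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do         # @59/@60: create null0..null7
        $rpc bdev_null_create "null$i" 100 4096  # size 100, block size 4096, as traced
    done
    for ((i = 0; i < nthreads; i++)); do         # @62/@63: one worker per bdev,
        add_remove $((i + 1)) "null$i" &         # NSID i+1 paired with null$i
        pids+=($!)                               # @64: record each worker's PID
    done
    wait "${pids[@]}"                            # @66: block until all eight finish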
00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1365915 1365917 1365918 1365920 1365923 1365924 1365927 1365928 00:06:48.873 19:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.873 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:48.873 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:48.873 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:48.873 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:48.873 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:48.873 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:48.873 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:49.132 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.132 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.132 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:49.132 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.132 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.132 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:49.133 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.133 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.133 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:49.133 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.133 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.133 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:49.133 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.133 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.133 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:49.133 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.133 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.133 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:49.133 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:06:49.133 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.133 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:49.133 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.133 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.133 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:49.392 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.393 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:49.393 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:49.393 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:49.393 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:49.393 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:49.393 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:49.393 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:49.393 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.393 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.393 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:49.393 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.393 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.393 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:49.393 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.393 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.393 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:49.393 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.393 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.393 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:49.393 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.393 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.393 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:49.393 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.393 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.393 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:49.393 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.393 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.393 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:49.393 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.393 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.393 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:49.652 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:49.652 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:49.652 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:49.652 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:49.652 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:49.652 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:49.652 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:49.652 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.912 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.912 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.912 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:49.912 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.912 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.912 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:49.912 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.912 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.912 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.912 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:49.912 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.912 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:49.912 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.912 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.912 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:49.912 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.912 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.912 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:49.912 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.912 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.912 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:49.912 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.912 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.912 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:49.912 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:49.912 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:49.912 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:50.172 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:50.172 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.172 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:50.172 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:50.172 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:50.172 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:50.172 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.172 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:50.172 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.172 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.172 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:50.172 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.172 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.172 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.172 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.172 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:50.172 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:50.172 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.172 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.172 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.172 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.172 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:50.172 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:50.172 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.172 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.172 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.172 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.172 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:50.172 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:50.432 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:50.432 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:50.432 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.432 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:50.432 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:50.432 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:50.432 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:50.432 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:50.691 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.691 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.691 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:50.691 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.691 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.691 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:50.691 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.691 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.692 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:50.692 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:50.692 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.692 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:50.692 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.692 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.692 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:50.692 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.692 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.692 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:50.692 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.692 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.692 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:50.692 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.692 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.692 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:50.692 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:50.692 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:50.692 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.692 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:50.692 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:50.692 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:50.692 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:50.692 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:50.951 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.951 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.951 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:50.951 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.951 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.951 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:50.951 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.951 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.951 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:50.951 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.951 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.951 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:50.951 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.951 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.951 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:50.951 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.951 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.951 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:50.951 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:06:50.951 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.951 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:50.951 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.951 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.951 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:51.211 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:51.211 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:51.211 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:51.211 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.211 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:51.211 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:51.211 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:51.211 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:51.211 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.211 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.211 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:51.211 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.211 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.211 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:51.211 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.211 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.211 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:51.211 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.211 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.211 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:51.470 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.470 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.470 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:51.470 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.471 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.471 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:51.471 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.471 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.471 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:51.471 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.471 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.471 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:51.471 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:51.471 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:51.471 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:51.471 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:51.471 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.471 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:51.471 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:51.471 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:51.730 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.730 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.731 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:51.731 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.731 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.731 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:51.731 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.731 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.731 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.731 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.731 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:51.731 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:51.731 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.731 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.731 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:51.731 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.731 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.731 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:51.731 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.731 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.731 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:51.731 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.731 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.731 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:51.731 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:51.989 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:51.989 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.989 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:51.989 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:51.989 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:51.990 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:51.990 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:51.990 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
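The churn above all comes from three lines of target/ns_hotplug_stress.sh: @16 is the loop counter, @17 attaches the null bdevs null0..null7 as namespaces 1..8 of nqn.2016-06.io.spdk:cnode1, and @18 detaches them again, each pass in a fresh order. A minimal sketch of that loop, reconstructed from the xtrace output alone — the rpc.py path, the NQN, and the nsid-to-null-bdev naming are copied from the log, while the shuf-based ordering and the loop scaffolding are assumptions:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for (( i = 0; i < 10; ++i )); do        # @16: xtrace shows (( ++i )) / (( i < 10 )) per pass
        for n in $(seq 1 8 | shuf); do      # @17: hot-add namespaces 1..8 in random order
            "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
        done
        for n in $(seq 1 8 | shuf); do      # @18: hot-remove them again in another random order
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
        done
    done

Ten such cycles against a live target are what exercise the namespace hot-plug path; the varying namespace order from pass to pass in the trace is consistent with this per-pass reshuffling.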
00:06:51.990 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.990 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:51.990 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.990 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.990 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:51.990 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.990 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.990 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:51.990 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.990 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.990 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:51.990 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.990 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.990 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:51.990 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.990 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.990 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:51.990 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.990 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.990 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:51.990 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.990 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.990 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:52.248 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:52.248 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:52.248 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.248 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:52.248 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:52.248 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:52.248 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:52.248 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:52.508 rmmod nvme_tcp 00:06:52.508 rmmod nvme_fabrics 00:06:52.508 rmmod nvme_keyring 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1359658 ']' 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1359658 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1359658 ']' 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1359658 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1359658 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1359658' 00:06:52.508 killing process with pid 1359658 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1359658 00:06:52.508 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1359658 00:06:52.767 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:52.767 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:52.767 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:52.767 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:52.768 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:52.768 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:52.768 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:52.768 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:55.305 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:55.305 00:06:55.305 real 0m47.908s 00:06:55.305 user 3m12.661s 00:06:55.305 sys 0m22.829s 00:06:55.305 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.305 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:55.305 ************************************ 00:06:55.305 END TEST nvmf_ns_hotplug_stress 00:06:55.305 ************************************ 00:06:55.305 19:07:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:55.305 19:07:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:55.305 19:07:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.305 19:07:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:55.305 ************************************ 00:06:55.305 START TEST nvmf_delete_subsystem 00:06:55.305 ************************************ 00:06:55.305 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:55.305 * Looking for test storage... 
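With the stress loop done, the trap is cleared and nvmftestfini tears the target down: the modprobe -v -r calls above produce the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines, and killprocess stops reactor pid 1359658. A condensed sketch of that teardown as it reads from the trace — the pid, the {1..20} retry bound, and the sudo/comm check are taken from the log; the sleep pacing and the kill -9 branch are assumptions:

    nvmfcleanup() {
        sync
        set +e                                # unloading can fail while connections drain
        for i in {1..20}; do                  # @121: retry the transport module unload
            modprobe -v -r nvme-tcp && break  # succeeded on the first pass in this run
            sleep 1                           # assumed pacing between retries
        done
        modprobe -v -r nvme-fabrics           # @123
        set -e
    }

    killprocess() {
        local pid=$1                              # 1359658: the target's reactor process
        kill -0 "$pid"                            # fail fast if it already exited
        name=$(ps --no-headers -o comm= "$pid")   # reactor_1 in this run
        if [[ $name == sudo ]]; then
            sudo kill -9 "$pid"                   # assumed branch; not taken above
        else
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid"                               # reap it so the ports are really free
    }

The real 0m47.908s / user 3m12.661s / sys 0m22.829s block above is run_test's timing summary for nvmf_ns_hotplug_stress; run_test then immediately launches nvmf_delete_subsystem, whose test-storage probe continues directly below.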
00:06:55.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three /opt/... entries repeated several more times]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[the same repeated /opt/... entries]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[the same repeated /opt/... entries]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[the same repeated /opt/... entries]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0
00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem --
nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:55.306 19:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 
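Before following the NIC scan that the trace prints next, it helps to spell out what the common.sh bootstrap above established. A minimal bash sketch follows; the values are copied from the xtrace output, while the derivation of NVME_HOSTID and the surrounding control flow are assumptions, not the real common.sh:

    NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422   # listener ports used below
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumed: the uuid part doubles as the host ID
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id plus the full tracepoint mask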
00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:01.876 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:01.876 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:01.876 Found net devices under 0000:af:00.0: cvl_0_0 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:01.876 Found net devices under 0000:af:00.1: cvl_0_1 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:01.876 19:07:46 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:01.876 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:01.877 19:07:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:01.877 19:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:01.877 19:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:01.877 19:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:01.877 19:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:01.877 19:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:01.877 19:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:01.877 19:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:01.877 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:01.877 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:07:01.877 00:07:01.877 --- 10.0.0.2 ping statistics --- 00:07:01.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.877 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:07:01.877 19:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:01.877 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
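The "Found 0000:af:00.0 (0x8086 - 0x159b)" and "Found net devices under ..." lines above come from walking sysfs. A self-contained sketch of that discovery step (the vendor and device IDs are the ones listed in the trace; the loop itself is an approximation of gather_supported_nvmf_pci_devs, not a copy of it):

    #!/usr/bin/env bash
    intel=0x8086
    e810=(0x1592 0x159b)                      # E810 device IDs from the trace
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == "$intel" ]] || continue
        dev=$(cat "$pci/device")
        for id in "${e810[@]}"; do
            [[ $dev == "$id" ]] || continue
            echo "Found ${pci##*/} ($intel - $dev)"
            for net in "$pci"/net/*; do       # present only while the kernel driver owns the port
                [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
            done
        done
    done

On this machine both 0x159b ports are bound to the ice driver, so each reports exactly one net device (cvl_0_0 and cvl_0_1) and is_hw ends up as yes.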
00:07:01.877 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:07:01.877 00:07:01.877 --- 10.0.0.1 ping statistics --- 00:07:01.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.877 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:07:01.877 19:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:01.877 19:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:07:01.877 19:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:01.877 19:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:01.877 19:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:01.877 19:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:01.877 19:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:01.877 19:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:01.877 19:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:01.877 19:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:01.877 19:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:01.877 19:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:01.877 19:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.877 19:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1370295 00:07:01.877 19:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1370295 00:07:01.877 19:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1370295 ']' 00:07:01.877 19:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.877 19:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:01.877 19:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.877 19:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:01.877 19:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.877 19:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:01.877 [2024-07-24 19:07:47.329336] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
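Condensed out of the ip and iptables calls above, the topology the test just built is simple: the target port (cvl_0_0, 10.0.0.2) moves into a private network namespace, the initiator port (cvl_0_1, 10.0.0.1) stays in the root namespace, and the two pings prove both directions work before nvmf_tgt starts inside the namespace. Replayed by hand (commands copied from the trace; the backgrounding is an assumption about how nvmfappstart runs the target):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1          # start clean
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target NIC into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open TCP/4420 on the initiator side
    ping -c 1 10.0.0.2                                            # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target ns -> initiator
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &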
00:07:01.877 [2024-07-24 19:07:47.329382] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:01.877 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.877 [2024-07-24 19:07:47.402863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:01.877 [2024-07-24 19:07:47.475246] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:01.877 [2024-07-24 19:07:47.475285] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:01.877 [2024-07-24 19:07:47.475295] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:01.877 [2024-07-24 19:07:47.475303] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:01.877 [2024-07-24 19:07:47.475310] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:01.877 [2024-07-24 19:07:47.475351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.877 [2024-07-24 19:07:47.475353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.136 19:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:02.136 19:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:07:02.136 19:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:02.136 19:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:02.136 19:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.136 19:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:02.136 19:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:02.136 19:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.136 19:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.136 [2024-07-24 19:07:48.162659] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:02.136 19:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.136 19:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:02.136 19:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.136 19:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.136 19:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.136 19:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:02.136 19:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.136 19:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.136 [2024-07-24 19:07:48.182835] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:02.136 19:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.136 19:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:02.136 19:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.136 19:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.136 NULL1 00:07:02.136 19:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.136 19:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:02.136 19:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.136 19:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.136 Delay0 00:07:02.136 19:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.136 19:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.136 19:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.136 19:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.136 19:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.136 19:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1370369 00:07:02.136 19:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:02.136 19:07:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:02.136 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.136 [2024-07-24 19:07:48.274442] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
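rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py, which talks to the target over /var/tmp/spdk.sock. Replayed directly, the configuration sequence above amounts to the following; the arguments are copied from the log, while the inline comments are interpretation and worth double-checking against rpc.py --help:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192       # TCP transport, 8 KiB I/O unit size
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
         -a -s SPDK00000000000001 -m 10                # allow any host, max 10 namespaces
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512               # 1000 MiB null bdev, 512 B blocks
    $RPC bdev_delay_create -b NULL1 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s avg/p99 read and write latency (us)
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The Delay0 bdev is the point of the test: with every I/O held for about a second, spdk_nvme_perf (queue depth 128, 70% reads, 512 B I/Os) is guaranteed to have requests in flight when nvmf_delete_subsystem rips the subsystem away, which is what produces the error storm below; it also explains the ~1,000,000 us averages in the latency tables further down.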
00:07:04.040 19:07:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:04.040 19:07:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:04.040 19:07:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:04.299 [many interleaved 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' lines elided]
00:07:04.299 [2024-07-24 19:07:50.444729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2304710 is same with the state(5) to be set
00:07:04.299 [more failed Read/Write completions and 'starting I/O failed: -6' lines elided]
00:07:04.299 [2024-07-24 19:07:50.445116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1ae800d660 is same with the state(5) to be set
00:07:04.299 [remaining in-flight 'Read/Write completed with error (sct=0, sc=8)' completions elided]
00:07:05.237 [2024-07-24 19:07:51.410222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e4450 is same with the state(5) to be set
00:07:05.237 [failed completions elided]
00:07:05.237 [2024-07-24 19:07:51.446400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1ae800d330 is same with the state(5) to be set
00:07:05.237 [failed completions elided]
00:07:05.237 [2024-07-24 19:07:51.446713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2304a40 is same with the state(5) to be set
00:07:05.237 [failed completions elided]
00:07:05.237 [2024-07-24 19:07:51.447175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e3910 is same with the state(5) to be set
00:07:05.237 [failed completions elided]
00:07:05.238 [2024-07-24 19:07:51.447360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e3af0 is same with the state(5) to be set
00:07:05.238 Initializing NVMe Controllers
00:07:05.238 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:05.238 Controller IO queue size 128, less than required.
00:07:05.238 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:05.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:05.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:05.238 Initialization complete. Launching workers. 00:07:05.238 ======================================================== 00:07:05.238 Latency(us) 00:07:05.238 Device Information : IOPS MiB/s Average min max 00:07:05.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 177.81 0.09 959992.42 1347.35 1011532.30 00:07:05.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 151.99 0.07 891305.85 260.96 1011636.65 00:07:05.238 ======================================================== 00:07:05.238 Total : 329.80 0.16 928338.67 260.96 1011636.65 00:07:05.238 00:07:05.238 [2024-07-24 19:07:51.448073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e4450 (9): Bad file descriptor 00:07:05.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:05.238 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.238 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:05.238 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1370369 00:07:05.238 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:05.805 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:05.805 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1370369 00:07:05.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1370369) - No such process 00:07:05.805 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1370369 00:07:05.805 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:07:05.805 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1370369 00:07:05.805 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:07:05.805 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.805 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:07:05.805 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.805 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1370369 00:07:05.805 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:07:05.805 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:05.805 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:05.805 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:05.805 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:05.805 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.805 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:05.805 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.805 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:05.805 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.805 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:05.805 [2024-07-24 19:07:51.984361] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:05.805 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.805 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.805 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.805 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:05.805 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.805 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1371116 00:07:05.805 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:05.805 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:05.805 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1371116 00:07:05.805 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:05.805 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.064 [2024-07-24 19:07:52.057958] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
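The kill -0 / sleep 0.5 fragments above and the (( delay++ > 20 )) checks that follow are a polling loop from delete_subsystem.sh. Its shape, reconstructed from the script line numbers in the trace (the real script may arrange it slightly differently), is:

    perf_pid=$!                                  # spdk_nvme_perf, started in the background
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do    # perf still running?
        (( delay++ > 20 )) && break              # ~10 s budget at one poll per 0.5 s
        sleep 0.5
    done
    wait "$perf_pid"                             # reap it; once it has exited, a stray kill -0
                                                 # prints "No such process", as seen below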
00:07:06.322 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:06.322 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1371116 00:07:06.322 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:06.889 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:06.889 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1371116 00:07:06.889 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:07.456 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:07.456 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1371116 00:07:07.456 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:08.022 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:08.022 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1371116 00:07:08.022 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:08.655 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:08.655 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1371116 00:07:08.655 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:08.914 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:08.914 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1371116 00:07:08.915 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:08.915 Initializing NVMe Controllers 00:07:08.915 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:08.915 Controller IO queue size 128, less than required. 00:07:08.915 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:08.915 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:08.915 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:08.915 Initialization complete. Launching workers. 
00:07:08.915 ======================================================== 00:07:08.915 Latency(us) 00:07:08.915 Device Information : IOPS MiB/s Average min max 00:07:08.915 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003284.31 1000182.80 1009674.45 00:07:08.915 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005403.63 1000180.16 1010376.31 00:07:08.915 ======================================================== 00:07:08.915 Total : 256.00 0.12 1004343.97 1000180.16 1010376.31 00:07:08.915 00:07:09.482 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:09.482 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1371116 00:07:09.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1371116) - No such process 00:07:09.482 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1371116 00:07:09.482 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:09.482 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:09.482 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:09.482 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:07:09.482 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:09.482 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:07:09.482 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:09.482 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:09.482 rmmod nvme_tcp 00:07:09.482 rmmod nvme_fabrics 00:07:09.482 rmmod nvme_keyring 00:07:09.482 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:09.482 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:07:09.482 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:07:09.482 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1370295 ']' 00:07:09.482 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1370295 00:07:09.482 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1370295 ']' 00:07:09.482 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1370295 00:07:09.482 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:07:09.482 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:09.482 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1370295 00:07:09.482 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:09.482 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:07:09.482 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1370295' 00:07:09.482 killing process with pid 1370295 00:07:09.482 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1370295 00:07:09.482 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1370295 00:07:09.741 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:09.741 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:09.741 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:09.741 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:09.741 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:09.741 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.741 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:09.741 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.279 19:07:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:12.279 00:07:12.279 real 0m16.899s 00:07:12.279 user 0m29.644s 00:07:12.279 sys 0m6.422s 00:07:12.279 19:07:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.279 19:07:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:12.279 ************************************ 00:07:12.279 END TEST nvmf_delete_subsystem 00:07:12.279 ************************************ 00:07:12.279 19:07:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:12.279 19:07:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:12.279 19:07:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.279 19:07:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:12.279 ************************************ 00:07:12.279 START TEST nvmf_host_management 00:07:12.279 ************************************ 00:07:12.279 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:12.279 * Looking for test storage... 
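Before the next test's preamble continues, note the teardown pattern just traced: killprocess checks the target PID's command name before signalling it, a guard against PID reuse. A condensed sketch of that shape, assuming only the checks visible in the trace (this is not the full upstream helper):

# Guarded kill, condensed from the autotest_common.sh checks traced
# above. The sudo comparison mirrors the "[ reactor_0 = sudo ]" test:
# never blindly signal a PID that now belongs to sudo.
killprocess_sketch() {
    local pid=$1 process_name=
    [ -z "$pid" ] && return 1               # nothing to kill
    kill -0 "$pid" 2>/dev/null || return 0  # already exited
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    if [ "$process_name" != sudo ]; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    fi
}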
00:07:12.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:12.279 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:12.279 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:12.279 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:12.279 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:12.279 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:12.279 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:12.279 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:12.279 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:12.279 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:12.279 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:12.279 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:12.279 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:12.279 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:07:12.279 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:07:12.279 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:12.279 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:12.279 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:12.279 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:12.279 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:12.279 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.279 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.279 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.279 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.279 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.279 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.279 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:12.279 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.279 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:12.279 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:12.280 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:12.280 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:12.280 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:12.280 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:12.280 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:07:12.280 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:12.280 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:12.280 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:12.280 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:12.280 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:12.280 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:12.280 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:12.280 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:12.280 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:12.280 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:12.280 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:12.280 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:12.280 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.280 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:12.280 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:12.280 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:07:12.280 19:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:07:18.853 
19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:18.853 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:18.853 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:18.853 Found net devices under 0000:af:00.0: cvl_0_0 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:18.853 Found net devices under 0000:af:00.1: cvl_0_1 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:18.853 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:18.854 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:18.854 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:18.854 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:18.854 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:18.854 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:18.854 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:18.854 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:18.854 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:18.854 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:18.854 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:18.854 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:18.854 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:18.854 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:18.854 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:18.854 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:18.854 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:18.854 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:18.854 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:18.854 19:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:18.854 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:18.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:18.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:07:18.854 00:07:18.854 --- 10.0.0.2 ping statistics --- 00:07:18.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.854 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:07:18.854 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:18.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:18.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:07:18.854 00:07:18.854 --- 10.0.0.1 ping statistics --- 00:07:18.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.854 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:07:18.854 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:18.854 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:07:18.854 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:18.854 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:18.854 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:18.854 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:18.854 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:18.854 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:18.854 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:18.854 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:18.854 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:18.854 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:18.854 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:18.854 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:18.854 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.854 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1375353 00:07:18.854 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1375353 00:07:18.854 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:18.854 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1375353 ']' 00:07:18.854 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.854 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:18.854 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.854 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:18.854 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:19.113 [2024-07-24 19:08:05.130518] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:07:19.113 [2024-07-24 19:08:05.130573] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:19.113 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.113 [2024-07-24 19:08:05.206113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:19.113 [2024-07-24 19:08:05.281334] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:19.113 [2024-07-24 19:08:05.281375] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:19.113 [2024-07-24 19:08:05.281384] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:19.113 [2024-07-24 19:08:05.281392] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:19.113 [2024-07-24 19:08:05.281399] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:19.113 [2024-07-24 19:08:05.281498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.113 [2024-07-24 19:08:05.281581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:19.113 [2024-07-24 19:08:05.281692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.113 [2024-07-24 19:08:05.281693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:20.052 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:20.052 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:20.052 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:20.052 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:20.052 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:20.052 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:20.052 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:20.052 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.052 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:20.052 [2024-07-24 19:08:05.975023] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:20.052 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.052 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter 
create_subsystem 00:07:20.052 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:20.052 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:20.052 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:20.052 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:20.052 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:20.052 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.052 19:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:20.052 Malloc0 00:07:20.052 [2024-07-24 19:08:06.037550] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:20.052 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.052 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:20.052 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:20.052 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:20.052 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1375654 00:07:20.052 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1375654 /var/tmp/bdevperf.sock 00:07:20.052 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1375654 ']' 00:07:20.052 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:20.052 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.052 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:20.052 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:20.052 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:20.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
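The bdevperf process launched above receives its bdev configuration as --json /dev/fd/63, i.e. bash process substitution over the output of gen_nvmf_target_json, so no config file touches disk. A sketch of the same invocation using the attach-controller parameters printed further down in the trace; the surrounding bdev-subsystem envelope is an assumption about the generated document, abbreviated here:

# Generate the bdev config on the fly and hand it to bdevperf through
# process substitution (what --json /dev/fd/63 in the trace is).
gen_config() {
    cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

# -q 64: queue depth, -o 65536: 64 KiB I/O size, -w verify: verifying
# workload, -t 10: run for ten seconds (flags taken from the trace)
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_config) -q 64 -o 65536 -w verify -t 10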
00:07:20.052 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.052 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:20.052 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:20.052 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:20.052 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:20.052 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:20.052 { 00:07:20.052 "params": { 00:07:20.052 "name": "Nvme$subsystem", 00:07:20.052 "trtype": "$TEST_TRANSPORT", 00:07:20.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:20.052 "adrfam": "ipv4", 00:07:20.052 "trsvcid": "$NVMF_PORT", 00:07:20.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:20.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:20.052 "hdgst": ${hdgst:-false}, 00:07:20.052 "ddgst": ${ddgst:-false} 00:07:20.052 }, 00:07:20.052 "method": "bdev_nvme_attach_controller" 00:07:20.052 } 00:07:20.052 EOF 00:07:20.052 )") 00:07:20.052 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:20.052 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:20.052 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:20.052 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:20.052 "params": { 00:07:20.052 "name": "Nvme0", 00:07:20.052 "trtype": "tcp", 00:07:20.052 "traddr": "10.0.0.2", 00:07:20.052 "adrfam": "ipv4", 00:07:20.052 "trsvcid": "4420", 00:07:20.052 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:20.052 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:20.052 "hdgst": false, 00:07:20.052 "ddgst": false 00:07:20.052 }, 00:07:20.052 "method": "bdev_nvme_attach_controller" 00:07:20.052 }' 00:07:20.052 [2024-07-24 19:08:06.144091] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:07:20.052 [2024-07-24 19:08:06.144143] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1375654 ] 00:07:20.052 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.052 [2024-07-24 19:08:06.215093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.052 [2024-07-24 19:08:06.283563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.621 Running I/O for 10 seconds... 
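While the ten-second job runs, the script does not simply sleep: the waitforio helper traced next polls the bdevperf RPC socket until the bdev has actually served some reads, so the host-removal step lands while I/O is genuinely in flight. A minimal sketch of that poll, with an illustrative rpc.py path and an assumed pacing delay:

# Poll bdev iostat over the bdevperf RPC socket until at least 100
# reads have completed, with a bounded number of attempts (the trace
# below reads num_read_ops=577 on its first try).
waitforio() {
    local sock=$1 bdev=$2 i ops
    for ((i = 10; i != 0; i--)); do
        ops=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" |
              jq -r '.bdevs[0].num_read_ops')
        [ "$ops" -ge 100 ] && return 0
        sleep 0.25   # pacing between polls is an assumption, not traced
    done
    return 1
}

waitforio /var/tmp/bdevperf.sock Nvme0n1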
00:07:20.881 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:20.881 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:20.881 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:20.881 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.881 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:20.881 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.881 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:20.881 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:20.881 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:20.881 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:20.881 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:20.881 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:20.881 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:20.881 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:20.881 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:20.881 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:20.881 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.881 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:20.881 19:08:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.881 19:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=577 00:07:20.881 19:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 577 -ge 100 ']' 00:07:20.881 19:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:20.881 19:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:20.881 19:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:20.881 19:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:20.881 19:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.881 19:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:20.881 19:08:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.881 19:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:20.881 19:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.881 19:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:20.881 [2024-07-24 19:08:07.023382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:20.881 [2024-07-24 19:08:07.023425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:20.881 [2024-07-24 19:08:07.023443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:20.881 [2024-07-24 19:08:07.023456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:20.881 [2024-07-24 19:08:07.023470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:20.881 [2024-07-24 19:08:07.023482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:20.881 [2024-07-24 19:08:07.023494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:20.881 [2024-07-24 19:08:07.023507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:20.881 [2024-07-24 19:08:07.023520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cca70 is same with the state(5) to be set 00:07:20.881 [2024-07-24 19:08:07.024262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.881 [2024-07-24 19:08:07.024289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:20.881 [2024-07-24 19:08:07.024311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.881 [2024-07-24 19:08:07.024325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:20.881 [2024-07-24 19:08:07.024341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.881 [2024-07-24 19:08:07.024353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:20.881 [2024-07-24 19:08:07.024368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.881 [2024-07-24 19:08:07.024381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:20.881 [2024-07-24 19:08:07.024395] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.881 [2024-07-24 19:08:07.024410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:20.881 [2024-07-24 19:08:07.024424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.881 [2024-07-24 19:08:07.024437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:20.881 [2024-07-24 19:08:07.024453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.881 [2024-07-24 19:08:07.024466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:20.881 [2024-07-24 19:08:07.024481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.881 [2024-07-24 19:08:07.024494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:20.881 [2024-07-24 19:08:07.024508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.882 [2024-07-24 19:08:07.024527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:20.882 [2024-07-24 19:08:07.024541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.882 [2024-07-24 19:08:07.024555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:20.882 [2024-07-24 19:08:07.024569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.882 [2024-07-24 19:08:07.024582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:20.882 [2024-07-24 19:08:07.024597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.882 [2024-07-24 19:08:07.024610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:20.882 [2024-07-24 19:08:07.024625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.882 [2024-07-24 19:08:07.024638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:20.882 [2024-07-24 19:08:07.024654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.882 [2024-07-24 19:08:07.024667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:20.882 [2024-07-24 19:08:07.024681] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.882 [2024-07-24 19:08:07.024695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:20.882 [2024-07-24 19:08:07.024709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.882 [2024-07-24 19:08:07.024729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:20.882 [2024-07-24 19:08:07.024744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.882 [2024-07-24 19:08:07.024756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:20.882 [2024-07-24 19:08:07.024772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.882 [2024-07-24 19:08:07.024784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:20.882 [2024-07-24 19:08:07.024799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.882 [2024-07-24 19:08:07.024812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:20.882 [2024-07-24 19:08:07.024827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.882 [2024-07-24 19:08:07.024840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:20.882 [2024-07-24 19:08:07.024854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.882 [2024-07-24 19:08:07.024868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:20.882 [2024-07-24 19:08:07.024885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.882 [2024-07-24 19:08:07.024898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:20.882 [2024-07-24 19:08:07.024913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.882 [2024-07-24 19:08:07.024926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:20.882 [2024-07-24 19:08:07.024941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.882 [2024-07-24 19:08:07.024954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:20.882 [2024-07-24 19:08:07.024968] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.882 [2024-07-24 19:08:07.024981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:20.882
[the same WRITE / ABORTED - SQ DELETION notice pair repeats for cid:25 through cid:63, lba 85120 through 89984 in 128-block steps, as queue pair 1 is torn down; repeated pairs elided]
00:07:20.883 [2024-07-24 19:08:07.026171] 
bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18dda30 was disconnected and freed. reset controller. 00:07:20.883 [2024-07-24 19:08:07.027147] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:20.883 task offset: 81920 on job bdev=Nvme0n1 fails 00:07:20.883 00:07:20.883 Latency(us) 00:07:20.883 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:20.883 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:20.883 Job: Nvme0n1 ended in about 0.42 seconds with error 00:07:20.883 Verification LBA range: start 0x0 length 0x400 00:07:20.883 Nvme0n1 : 0.42 1515.00 94.69 151.50 0.00 37503.05 2254.44 36280.73 00:07:20.883 =================================================================================================================== 00:07:20.883 Total : 1515.00 94.69 151.50 0.00 37503.05 2254.44 36280.73 00:07:20.883 [2024-07-24 19:08:07.028771] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:20.883 [2024-07-24 19:08:07.028793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14cca70 (9): Bad file descriptor 00:07:20.883 19:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.883 19:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:20.883 [2024-07-24 19:08:07.074285] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:21.818 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1375654 00:07:21.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1375654) - No such process 00:07:21.818 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:21.818 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:21.818 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:21.818 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:21.818 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:21.818 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:21.818 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:21.818 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:21.818 { 00:07:21.818 "params": { 00:07:21.818 "name": "Nvme$subsystem", 00:07:21.818 "trtype": "$TEST_TRANSPORT", 00:07:21.818 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:21.818 "adrfam": "ipv4", 00:07:21.818 "trsvcid": "$NVMF_PORT", 00:07:21.818 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:21.818 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:21.818 "hdgst": ${hdgst:-false}, 00:07:21.818 "ddgst": ${ddgst:-false} 00:07:21.818 }, 00:07:21.818 "method": "bdev_nvme_attach_controller" 00:07:21.818 } 00:07:21.818 EOF 00:07:21.818 )") 
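The heredoc traced above is how gen_nvmf_target_json builds the --json payload for bdevperf: one params/method object per subsystem, appended to config[] and later joined and piped through jq. A minimal standalone sketch of the same pattern (gen_attach_json is an illustrative name, not the real helper; TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT are assumed to be exported, as they are in this environment):

gen_attach_json() {
    # Emit one bdev_nvme_attach_controller entry, matching the traced template.
    local n=$1
    cat <<EOF
{
  "params": {
    "name": "Nvme${n}",
    "trtype": "${TEST_TRANSPORT}",
    "traddr": "${NVMF_FIRST_TARGET_IP}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT}",
    "subnqn": "nqn.2016-06.io.spdk:cnode${n}",
    "hostnqn": "nqn.2016-06.io.spdk:host${n}",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
# e.g. gen_attach_json 0 | jq .  -- then handed to bdevperf via --json /dev/fd/62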
00:07:21.818 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:21.818 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:21.818 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:21.818 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:21.818 "params": { 00:07:21.818 "name": "Nvme0", 00:07:21.818 "trtype": "tcp", 00:07:21.818 "traddr": "10.0.0.2", 00:07:21.818 "adrfam": "ipv4", 00:07:21.818 "trsvcid": "4420", 00:07:21.818 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:21.818 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:21.818 "hdgst": false, 00:07:21.818 "ddgst": false 00:07:21.818 }, 00:07:21.818 "method": "bdev_nvme_attach_controller" 00:07:21.818 }' 00:07:22.077 [2024-07-24 19:08:08.087661] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:07:22.077 [2024-07-24 19:08:08.087713] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1375936 ] 00:07:22.077 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.077 [2024-07-24 19:08:08.158276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.077 [2024-07-24 19:08:08.224089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.335 Running I/O for 1 seconds... 00:07:23.713 00:07:23.713 Latency(us) 00:07:23.713 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:23.713 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:23.713 Verification LBA range: start 0x0 length 0x400 00:07:23.713 Nvme0n1 : 1.03 1554.17 97.14 0.00 0.00 40635.14 8441.04 38377.88 00:07:23.713 =================================================================================================================== 00:07:23.713 Total : 1554.17 97.14 0.00 0.00 40635.14 8441.04 38377.88 00:07:23.713 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:23.713 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:23.713 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:23.713 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:23.713 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:23.713 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:23.713 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:07:23.713 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:23.713 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:07:23.713 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:23.713 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:07:23.713 rmmod nvme_tcp 00:07:23.713 rmmod nvme_fabrics 00:07:23.713 rmmod nvme_keyring 00:07:23.713 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:23.713 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:07:23.713 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:07:23.713 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1375353 ']' 00:07:23.713 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1375353 00:07:23.713 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1375353 ']' 00:07:23.713 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1375353 00:07:23.713 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:23.713 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:23.713 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1375353 00:07:23.713 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:23.713 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:23.713 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1375353' 00:07:23.713 killing process with pid 1375353 00:07:23.713 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1375353 00:07:23.713 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1375353 00:07:23.972 [2024-07-24 19:08:10.065583] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:23.972 19:08:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:23.972 19:08:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:23.972 19:08:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:23.972 19:08:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:23.972 19:08:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:23.972 19:08:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.972 19:08:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:23.972 19:08:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.508 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:26.508 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:26.508 00:07:26.508 real 0m14.168s 00:07:26.508 user 0m23.558s 00:07:26.508 sys 0m6.571s 00:07:26.508 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.508 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:26.508 ************************************ 00:07:26.508 END TEST nvmf_host_management 00:07:26.508 ************************************ 00:07:26.508 19:08:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:26.508 19:08:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:26.508 19:08:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.508 19:08:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:26.508 ************************************ 00:07:26.508 START TEST nvmf_lvol 00:07:26.508 ************************************ 00:07:26.508 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:26.508 * Looking for test storage... 00:07:26.508 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:26.508 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:26.508 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:26.509 
19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # 
'[' 0 -eq 1 ']' 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:07:26.509 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:33.085 19:08:18 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:33.085 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
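The array plumbing traced here is nvmf/common.sh classifying supported NICs by PCI vendor:device ID before picking the target and initiator interfaces; with SPDK_TEST_NVMF_NICS=e810, pci_devs is narrowed to the e810 list. A simplified sketch of that discovery (pci_bus_cache stands in for the lookup table common.sh builds earlier from its PCI bus scan):

intel=0x8086 mellanox=0x15b3
declare -A pci_bus_cache    # assumed pre-populated: "vendor:device" -> PCI addresses
e810=() x722=() mlx=() net_devs=()
e810+=(${pci_bus_cache["$intel:0x1592"]})
e810+=(${pci_bus_cache["$intel:0x159b"]})    # the 0000:af:00.0/1 ports found below
x722+=(${pci_bus_cache["$intel:0x37d2"]})
pci_devs=("${e810[@]}")                      # SPDK_TEST_NVMF_NICS=e810
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    net_devs+=("${pci_net_devs[@]##*/}")     # kernel names, e.g. cvl_0_0, cvl_0_1
done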
00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:33.085 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.085 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:33.086 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:33.086 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:33.086 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:33.086 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.086 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:33.086 Found net devices under 0000:af:00.0: cvl_0_0 00:07:33.086 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.086 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:33.086 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.086 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:33.086 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:33.086 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:33.086 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:33.086 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.086 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:33.086 Found net devices under 0000:af:00.1: cvl_0_1 00:07:33.086 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.086 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:33.086 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:07:33.086 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:33.086 
19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:33.086 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:33.086 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:33.086 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:33.086 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:33.086 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:33.086 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:33.086 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:33.086 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:33.086 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:33.086 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:33.086 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:33.086 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:33.086 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:33.086 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:33.086 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:33.086 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:33.086 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:33.086 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:33.086 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:33.086 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:33.086 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:33.086 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:33.086 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:07:33.086 00:07:33.086 --- 10.0.0.2 ping statistics --- 00:07:33.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:33.086 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:07:33.086 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:33.086 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:33.086 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:07:33.086 00:07:33.086 --- 10.0.0.1 ping statistics --- 00:07:33.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:33.086 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:07:33.086 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:33.086 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:07:33.086 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:33.086 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:33.086 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:33.086 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:33.086 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:33.086 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:33.086 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:33.086 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:33.086 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:33.086 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:33.086 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:33.086 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1379951 00:07:33.086 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1379951 00:07:33.086 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:33.086 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1379951 ']' 00:07:33.086 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.086 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:33.086 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.086 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:33.086 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:33.086 [2024-07-24 19:08:19.316731] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:07:33.086 [2024-07-24 19:08:19.316784] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:33.400 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.400 [2024-07-24 19:08:19.391644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:33.400 [2024-07-24 19:08:19.465619] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:33.400 [2024-07-24 19:08:19.465658] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:33.400 [2024-07-24 19:08:19.465673] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:33.400 [2024-07-24 19:08:19.465684] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:33.400 [2024-07-24 19:08:19.465698] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:33.400 [2024-07-24 19:08:19.465768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.400 [2024-07-24 19:08:19.465864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.400 [2024-07-24 19:08:19.465867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.969 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:33.969 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:33.969 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:33.969 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:33.969 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:33.969 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:33.969 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:34.227 [2024-07-24 19:08:20.322611] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:34.227 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:34.486 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:34.486 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:34.745 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:34.745 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:34.745 19:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:35.004 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=af76c050-3a93-49be-a63d-74947b9a93b6 
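The lvstore just created sits on a stack assembled a few commands back: two 64 MB malloc bdevs, a RAID-0 across them, then the store itself; a 20 MB lvol and its NVMe/TCP subsystem follow immediately below. Condensed to the bare RPC sequence (the UUIDs are returned by the calls rather than fixed; rpc_py is the same script used throughout this run):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc_py bdev_malloc_create 64 512                    # -> Malloc0
$rpc_py bdev_malloc_create 64 512                    # -> Malloc1
$rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc_py bdev_lvol_create_lvstore raid0 lvs)    # lvstore UUID (af76c050-... here)
lvol=$($rpc_py bdev_lvol_create -u "$lvs" lvol 20)   # 20 MB lvol, returns its UUID
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420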
00:07:35.004 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u af76c050-3a93-49be-a63d-74947b9a93b6 lvol 20 00:07:35.263 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a3e1c99a-11e2-4b45-a253-bb9aa6ee8e0b 00:07:35.263 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:35.263 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a3e1c99a-11e2-4b45-a253-bb9aa6ee8e0b 00:07:35.521 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:35.780 [2024-07-24 19:08:21.828852] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:35.780 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:36.037 19:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1380455 00:07:36.037 19:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:36.037 19:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:36.037 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.973 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a3e1c99a-11e2-4b45-a253-bb9aa6ee8e0b MY_SNAPSHOT 00:07:37.231 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=82a0fc6b-94d6-4f89-830d-6e339b2a90eb 00:07:37.231 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a3e1c99a-11e2-4b45-a253-bb9aa6ee8e0b 30 00:07:37.490 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 82a0fc6b-94d6-4f89-830d-6e339b2a90eb MY_CLONE 00:07:37.490 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a076b4c5-03c8-4323-9d59-aa930e6e159e 00:07:37.490 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate a076b4c5-03c8-4323-9d59-aa930e6e159e 00:07:38.057 19:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1380455 00:07:46.174 Initializing NVMe Controllers 00:07:46.174 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:46.174 Controller IO queue size 128, less than required. 00:07:46.174 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
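While spdk_nvme_perf drives its 10-second random-write load against the exported lvol, the trace above walks the grow path: snapshot the live volume, resize it from 20 to 30 (LVOL_BDEV_INIT_SIZE to LVOL_BDEV_FINAL_SIZE), clone the snapshot, and inflate the clone. Reduced to its RPCs (shell variables stand in for the UUIDs returned at each step; a sketch, not the verbatim script):

snap=$($rpc_py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc_py bdev_lvol_resize "$lvol" 30            # grow the live lvol 20 -> 30
clone=$($rpc_py bdev_lvol_clone "$snap" MY_CLONE)
$rpc_py bdev_lvol_inflate "$clone"             # detach the clone from its snapshot
wait "$perf_pid"                               # let the I/O run finish on the new layout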
00:07:46.174 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:46.174 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:46.174 Initialization complete. Launching workers. 00:07:46.174 ======================================================== 00:07:46.174 Latency(us) 00:07:46.174 Device Information : IOPS MiB/s Average min max 00:07:46.174 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12505.70 48.85 10238.01 1793.71 49346.86 00:07:46.174 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12336.40 48.19 10378.64 3673.51 52498.47 00:07:46.174 ======================================================== 00:07:46.174 Total : 24842.10 97.04 10307.85 1793.71 52498.47 00:07:46.174 00:07:46.174 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:46.433 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a3e1c99a-11e2-4b45-a253-bb9aa6ee8e0b 00:07:46.692 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u af76c050-3a93-49be-a63d-74947b9a93b6 00:07:46.951 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:46.951 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:46.951 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:46.951 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:46.951 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:07:46.951 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:46.951 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:07:46.951 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:46.951 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:46.951 rmmod nvme_tcp 00:07:46.951 rmmod nvme_fabrics 00:07:46.951 rmmod nvme_keyring 00:07:46.951 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:46.951 19:08:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:07:46.951 19:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:07:46.951 19:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1379951 ']' 00:07:46.951 19:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1379951 00:07:46.951 19:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1379951 ']' 00:07:46.951 19:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1379951 00:07:46.951 19:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:07:46.951 19:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:46.951 19:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1379951 00:07:46.951 19:08:33 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:46.951 19:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:46.951 19:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1379951' 00:07:46.951 killing process with pid 1379951 00:07:46.951 19:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1379951 00:07:46.951 19:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1379951 00:07:47.210 19:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:47.210 19:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:47.210 19:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:47.210 19:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:47.210 19:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:47.210 19:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.210 19:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:47.210 19:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.748 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:49.748 00:07:49.748 real 0m23.110s 00:07:49.748 user 1m2.479s 00:07:49.748 sys 0m9.869s 00:07:49.748 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:49.748 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:49.748 ************************************ 00:07:49.748 END TEST nvmf_lvol 00:07:49.748 ************************************ 00:07:49.748 19:08:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:49.748 19:08:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:49.748 19:08:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:49.748 19:08:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:49.748 ************************************ 00:07:49.748 START TEST nvmf_lvs_grow 00:07:49.748 ************************************ 00:07:49.748 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:49.748 * Looking for test storage... 
00:07:49.748 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:49.748 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:49.748 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:49.748 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:49.748 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:49.748 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:49.748 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:49.748 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:49.748 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:49.748 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:49.748 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:49.748 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:49.748 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:49.748 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:07:49.748 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:07:49.748 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:49.748 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:49.748 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:49.748 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:49.748 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:49.749 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:49.749 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.749 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.749 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.749 19:08:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.749 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.749 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:49.749 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.749 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:07:49.749 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:49.749 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:49.749 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:49.749 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:49.749 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:49.749 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:49.749 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:49.749 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:49.749 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:49.749 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:49.749 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:49.749 19:08:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:49.749 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:49.749 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:49.749 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:49.749 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:49.749 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.749 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:49.749 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.749 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:49.749 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:49.749 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:07:49.749 19:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:56.343 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:56.343 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:07:56.343 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:56.343 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:56.343 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:56.343 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:56.343 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:56.343 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:07:56.343 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:56.343 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:07:56.343 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:07:56.343 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:07:56.343 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:07:56.343 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:07:56.343 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:07:56.343 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:56.343 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:56.343 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:56.343 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:56.343 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:56.343 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:56.343 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:56.344 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:56.344 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:56.344 
19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:56.344 Found net devices under 0000:af:00.0: cvl_0_0 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:56.344 Found net devices under 0000:af:00.1: cvl_0_1 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:56.344 19:08:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:56.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:56.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:07:56.344 00:07:56.344 --- 10.0.0.2 ping statistics --- 00:07:56.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.344 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:56.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:56.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:07:56.344 00:07:56.344 --- 10.0.0.1 ping statistics --- 00:07:56.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.344 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1386071 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1386071 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1386071 ']' 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.344 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:56.345 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:56.345 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:56.604 [2024-07-24 19:08:42.621493] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
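To restate the network bring-up just traced (a condensed sketch, not an extra step; interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are exactly those recorded above): the target side is isolated in its own network namespace, and the initiator reaches it over TCP port 4420.

  ip netns add cvl_0_0_ns_spdk                                        # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP in
  ping -c 1 10.0.0.2                                                  # reachability check both ways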
00:07:56.604 [2024-07-24 19:08:42.621541] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.604 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.604 [2024-07-24 19:08:42.695409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.604 [2024-07-24 19:08:42.768198] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:56.604 [2024-07-24 19:08:42.768237] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:56.604 [2024-07-24 19:08:42.768251] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:56.604 [2024-07-24 19:08:42.768263] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:56.604 [2024-07-24 19:08:42.768272] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:56.604 [2024-07-24 19:08:42.768299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.173 19:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:57.173 19:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:07:57.173 19:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:57.173 19:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:57.173 19:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:57.432 19:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:57.432 19:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:57.432 [2024-07-24 19:08:43.600033] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:57.432 19:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:57.432 19:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:57.432 19:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.432 19:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:57.432 ************************************ 00:07:57.432 START TEST lvs_grow_clean 00:07:57.432 ************************************ 00:07:57.432 19:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:07:57.432 19:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:57.432 19:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:57.432 19:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:57.432 19:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:07:57.432 19:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:57.432 19:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:57.432 19:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:57.432 19:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:57.432 19:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:57.722 19:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:57.722 19:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:57.981 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=d90f4f4b-a71d-4fa3-9793-8f912e9dd958 00:07:57.981 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d90f4f4b-a71d-4fa3-9793-8f912e9dd958 00:07:57.981 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:57.981 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:57.981 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:57.981 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d90f4f4b-a71d-4fa3-9793-8f912e9dd958 lvol 150 00:07:58.240 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=0d963702-63bd-4f68-9b54-b1296903b6cb 00:07:58.240 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:58.240 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:58.499 [2024-07-24 19:08:44.505319] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:58.499 [2024-07-24 19:08:44.505372] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:58.499 true 00:07:58.499 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d90f4f4b-a71d-4fa3-9793-8f912e9dd958 00:07:58.499 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:58.499 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:58.499 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:58.758 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0d963702-63bd-4f68-9b54-b1296903b6cb 00:07:59.017 19:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:59.017 [2024-07-24 19:08:45.171282] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:59.017 19:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:59.276 19:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1386571 00:07:59.276 19:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:59.277 19:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:59.277 19:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1386571 /var/tmp/bdevperf.sock 00:07:59.277 19:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1386571 ']' 00:07:59.277 19:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:59.277 19:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:59.277 19:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:59.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:59.277 19:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:59.277 19:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:59.277 [2024-07-24 19:08:45.385329] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:07:59.277 [2024-07-24 19:08:45.385378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1386571 ] 00:07:59.277 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.277 [2024-07-24 19:08:45.454743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.536 [2024-07-24 19:08:45.524130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.105 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:00.105 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:00.105 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:00.365 Nvme0n1 00:08:00.365 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:00.624 [ 00:08:00.624 { 00:08:00.624 "name": "Nvme0n1", 00:08:00.624 "aliases": [ 00:08:00.624 "0d963702-63bd-4f68-9b54-b1296903b6cb" 00:08:00.624 ], 00:08:00.624 "product_name": "NVMe disk", 00:08:00.624 "block_size": 4096, 00:08:00.624 "num_blocks": 38912, 00:08:00.624 "uuid": "0d963702-63bd-4f68-9b54-b1296903b6cb", 00:08:00.624 "assigned_rate_limits": { 00:08:00.624 "rw_ios_per_sec": 0, 00:08:00.624 "rw_mbytes_per_sec": 0, 00:08:00.624 "r_mbytes_per_sec": 0, 00:08:00.624 "w_mbytes_per_sec": 0 00:08:00.624 }, 00:08:00.624 "claimed": false, 00:08:00.624 "zoned": false, 00:08:00.624 "supported_io_types": { 00:08:00.624 "read": true, 00:08:00.624 "write": true, 00:08:00.624 "unmap": true, 00:08:00.624 "flush": true, 00:08:00.624 "reset": true, 00:08:00.624 "nvme_admin": true, 00:08:00.624 "nvme_io": true, 00:08:00.624 "nvme_io_md": false, 00:08:00.624 "write_zeroes": true, 00:08:00.624 "zcopy": false, 00:08:00.624 "get_zone_info": false, 00:08:00.624 "zone_management": false, 00:08:00.624 "zone_append": false, 00:08:00.624 "compare": true, 00:08:00.624 "compare_and_write": true, 00:08:00.625 "abort": true, 00:08:00.625 "seek_hole": false, 00:08:00.625 "seek_data": false, 00:08:00.625 "copy": true, 00:08:00.625 "nvme_iov_md": false 00:08:00.625 }, 00:08:00.625 "memory_domains": [ 00:08:00.625 { 00:08:00.625 "dma_device_id": "system", 00:08:00.625 "dma_device_type": 1 00:08:00.625 } 00:08:00.625 ], 00:08:00.625 "driver_specific": { 00:08:00.625 "nvme": [ 00:08:00.625 { 00:08:00.625 "trid": { 00:08:00.625 "trtype": "TCP", 00:08:00.625 "adrfam": "IPv4", 00:08:00.625 "traddr": "10.0.0.2", 00:08:00.625 "trsvcid": "4420", 00:08:00.625 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:00.625 }, 00:08:00.625 "ctrlr_data": { 00:08:00.625 "cntlid": 1, 00:08:00.625 "vendor_id": "0x8086", 00:08:00.625 "model_number": "SPDK bdev Controller", 00:08:00.625 "serial_number": "SPDK0", 00:08:00.625 "firmware_revision": "24.09", 00:08:00.625 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:00.625 "oacs": { 00:08:00.625 "security": 0, 00:08:00.625 "format": 0, 00:08:00.625 "firmware": 0, 00:08:00.625 "ns_manage": 0 00:08:00.625 }, 00:08:00.625 
"multi_ctrlr": true, 00:08:00.625 "ana_reporting": false 00:08:00.625 }, 00:08:00.625 "vs": { 00:08:00.625 "nvme_version": "1.3" 00:08:00.625 }, 00:08:00.625 "ns_data": { 00:08:00.625 "id": 1, 00:08:00.625 "can_share": true 00:08:00.625 } 00:08:00.625 } 00:08:00.625 ], 00:08:00.625 "mp_policy": "active_passive" 00:08:00.625 } 00:08:00.625 } 00:08:00.625 ] 00:08:00.625 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1386838 00:08:00.625 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:00.625 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:00.625 Running I/O for 10 seconds... 00:08:01.563 Latency(us) 00:08:01.563 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:01.563 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.563 Nvme0n1 : 1.00 23835.00 93.11 0.00 0.00 0.00 0.00 0.00 00:08:01.563 =================================================================================================================== 00:08:01.563 Total : 23835.00 93.11 0.00 0.00 0.00 0.00 0.00 00:08:01.563 00:08:02.499 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d90f4f4b-a71d-4fa3-9793-8f912e9dd958 00:08:02.758 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.758 Nvme0n1 : 2.00 24013.50 93.80 0.00 0.00 0.00 0.00 0.00 00:08:02.758 =================================================================================================================== 00:08:02.758 Total : 24013.50 93.80 0.00 0.00 0.00 0.00 0.00 00:08:02.758 00:08:02.758 true 00:08:02.758 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d90f4f4b-a71d-4fa3-9793-8f912e9dd958 00:08:02.758 19:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:03.017 19:08:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:03.017 19:08:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:03.017 19:08:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1386838 00:08:03.584 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.584 Nvme0n1 : 3.00 24079.67 94.06 0.00 0.00 0.00 0.00 0.00 00:08:03.584 =================================================================================================================== 00:08:03.584 Total : 24079.67 94.06 0.00 0.00 0.00 0.00 0.00 00:08:03.584 00:08:04.960 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.960 Nvme0n1 : 4.00 24044.00 93.92 0.00 0.00 0.00 0.00 0.00 00:08:04.960 =================================================================================================================== 00:08:04.960 Total : 24044.00 93.92 0.00 0.00 0.00 0.00 0.00 00:08:04.960 00:08:05.897 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:08:05.897 Nvme0n1 : 5.00 24111.80 94.19 0.00 0.00 0.00 0.00 0.00 00:08:05.897 =================================================================================================================== 00:08:05.897 Total : 24111.80 94.19 0.00 0.00 0.00 0.00 0.00 00:08:05.897 00:08:06.833 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.834 Nvme0n1 : 6.00 24135.83 94.28 0.00 0.00 0.00 0.00 0.00 00:08:06.834 =================================================================================================================== 00:08:06.834 Total : 24135.83 94.28 0.00 0.00 0.00 0.00 0.00 00:08:06.834 00:08:07.770 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.770 Nvme0n1 : 7.00 24171.29 94.42 0.00 0.00 0.00 0.00 0.00 00:08:07.770 =================================================================================================================== 00:08:07.770 Total : 24171.29 94.42 0.00 0.00 0.00 0.00 0.00 00:08:07.770 00:08:08.707 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.707 Nvme0n1 : 8.00 24198.00 94.52 0.00 0.00 0.00 0.00 0.00 00:08:08.707 =================================================================================================================== 00:08:08.707 Total : 24198.00 94.52 0.00 0.00 0.00 0.00 0.00 00:08:08.707 00:08:09.642 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.642 Nvme0n1 : 9.00 24218.56 94.60 0.00 0.00 0.00 0.00 0.00 00:08:09.642 =================================================================================================================== 00:08:09.642 Total : 24218.56 94.60 0.00 0.00 0.00 0.00 0.00 00:08:09.642 00:08:10.578 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.578 Nvme0n1 : 10.00 24235.10 94.67 0.00 0.00 0.00 0.00 0.00 00:08:10.578 =================================================================================================================== 00:08:10.578 Total : 24235.10 94.67 0.00 0.00 0.00 0.00 0.00 00:08:10.578 00:08:10.578 00:08:10.578 Latency(us) 00:08:10.578 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:10.578 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.578 Nvme0n1 : 10.00 24237.41 94.68 0.00 0.00 5277.75 3224.37 13159.63 00:08:10.578 =================================================================================================================== 00:08:10.578 Total : 24237.41 94.68 0.00 0.00 5277.75 3224.37 13159.63 00:08:10.578 0 00:08:10.837 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1386571 00:08:10.837 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1386571 ']' 00:08:10.837 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1386571 00:08:10.837 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:10.837 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:10.837 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1386571 00:08:10.837 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:10.837 
19:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:10.837 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1386571' 00:08:10.837 killing process with pid 1386571 00:08:10.837 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1386571 00:08:10.837 Received shutdown signal, test time was about 10.000000 seconds 00:08:10.837 00:08:10.837 Latency(us) 00:08:10.837 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:10.837 =================================================================================================================== 00:08:10.837 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:10.837 19:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1386571 00:08:10.837 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:11.096 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:11.355 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d90f4f4b-a71d-4fa3-9793-8f912e9dd958 00:08:11.356 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:11.356 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:11.356 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:11.356 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:11.615 [2024-07-24 19:08:57.730247] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:11.615 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d90f4f4b-a71d-4fa3-9793-8f912e9dd958 00:08:11.615 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:11.615 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d90f4f4b-a71d-4fa3-9793-8f912e9dd958 00:08:11.615 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:11.615 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:11.615 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:11.615 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:11.615 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:11.615 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:11.615 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:11.615 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:11.615 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d90f4f4b-a71d-4fa3-9793-8f912e9dd958 00:08:11.874 request: 00:08:11.874 { 00:08:11.874 "uuid": "d90f4f4b-a71d-4fa3-9793-8f912e9dd958", 00:08:11.874 "method": "bdev_lvol_get_lvstores", 00:08:11.874 "req_id": 1 00:08:11.874 } 00:08:11.874 Got JSON-RPC error response 00:08:11.874 response: 00:08:11.874 { 00:08:11.874 "code": -19, 00:08:11.874 "message": "No such device" 00:08:11.874 } 00:08:11.874 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:11.874 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:11.874 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:11.874 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:11.874 19:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:11.874 aio_bdev 00:08:12.133 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0d963702-63bd-4f68-9b54-b1296903b6cb 00:08:12.133 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=0d963702-63bd-4f68-9b54-b1296903b6cb 00:08:12.133 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:12.133 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:12.133 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:12.133 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:12.133 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:12.133 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b 0d963702-63bd-4f68-9b54-b1296903b6cb -t 2000 00:08:12.393 [ 00:08:12.393 { 00:08:12.393 "name": "0d963702-63bd-4f68-9b54-b1296903b6cb", 00:08:12.393 "aliases": [ 00:08:12.393 "lvs/lvol" 00:08:12.393 ], 00:08:12.393 "product_name": "Logical Volume", 00:08:12.393 "block_size": 4096, 00:08:12.393 "num_blocks": 38912, 00:08:12.393 "uuid": "0d963702-63bd-4f68-9b54-b1296903b6cb", 00:08:12.393 "assigned_rate_limits": { 00:08:12.393 "rw_ios_per_sec": 0, 00:08:12.393 "rw_mbytes_per_sec": 0, 00:08:12.393 "r_mbytes_per_sec": 0, 00:08:12.393 "w_mbytes_per_sec": 0 00:08:12.393 }, 00:08:12.393 "claimed": false, 00:08:12.393 "zoned": false, 00:08:12.393 "supported_io_types": { 00:08:12.393 "read": true, 00:08:12.393 "write": true, 00:08:12.393 "unmap": true, 00:08:12.393 "flush": false, 00:08:12.393 "reset": true, 00:08:12.393 "nvme_admin": false, 00:08:12.393 "nvme_io": false, 00:08:12.393 "nvme_io_md": false, 00:08:12.393 "write_zeroes": true, 00:08:12.393 "zcopy": false, 00:08:12.393 "get_zone_info": false, 00:08:12.393 "zone_management": false, 00:08:12.393 "zone_append": false, 00:08:12.393 "compare": false, 00:08:12.393 "compare_and_write": false, 00:08:12.393 "abort": false, 00:08:12.393 "seek_hole": true, 00:08:12.393 "seek_data": true, 00:08:12.393 "copy": false, 00:08:12.393 "nvme_iov_md": false 00:08:12.393 }, 00:08:12.393 "driver_specific": { 00:08:12.393 "lvol": { 00:08:12.393 "lvol_store_uuid": "d90f4f4b-a71d-4fa3-9793-8f912e9dd958", 00:08:12.393 "base_bdev": "aio_bdev", 00:08:12.393 "thin_provision": false, 00:08:12.393 "num_allocated_clusters": 38, 00:08:12.393 "snapshot": false, 00:08:12.393 "clone": false, 00:08:12.393 "esnap_clone": false 00:08:12.393 } 00:08:12.393 } 00:08:12.393 } 00:08:12.393 ] 00:08:12.393 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:12.393 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:12.393 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d90f4f4b-a71d-4fa3-9793-8f912e9dd958 00:08:12.652 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:12.652 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d90f4f4b-a71d-4fa3-9793-8f912e9dd958 00:08:12.652 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:12.652 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:12.652 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0d963702-63bd-4f68-9b54-b1296903b6cb 00:08:12.911 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d90f4f4b-a71d-4fa3-9793-8f912e9dd958 00:08:12.911 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:13.170 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:13.170 00:08:13.170 real 0m15.694s 00:08:13.170 user 0m14.726s 00:08:13.170 sys 0m2.033s 00:08:13.170 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:13.170 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:13.170 ************************************ 00:08:13.170 END TEST lvs_grow_clean 00:08:13.170 ************************************ 00:08:13.170 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:13.170 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:13.170 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:13.170 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:13.481 ************************************ 00:08:13.481 START TEST lvs_grow_dirty 00:08:13.481 ************************************ 00:08:13.481 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:13.481 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:13.481 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:13.481 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:13.481 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:13.481 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:13.481 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:13.481 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:13.481 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:13.481 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:13.481 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:13.481 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:13.740 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
lvs=31c866fe-f423-4d7a-a16e-93c9c4e079b7 00:08:13.740 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31c866fe-f423-4d7a-a16e-93c9c4e079b7 00:08:13.740 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:13.999 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:13.999 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:13.999 19:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 31c866fe-f423-4d7a-a16e-93c9c4e079b7 lvol 150 00:08:13.999 19:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=5a159521-0c6e-432e-93c8-3e588a7190cb 00:08:13.999 19:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:13.999 19:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:14.259 [2024-07-24 19:09:00.307418] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:14.259 [2024-07-24 19:09:00.307469] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:14.259 true 00:08:14.259 19:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31c866fe-f423-4d7a-a16e-93c9c4e079b7 00:08:14.259 19:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:14.259 19:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:14.259 19:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:14.518 19:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5a159521-0c6e-432e-93c8-3e588a7190cb 00:08:14.777 19:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:14.777 [2024-07-24 19:09:00.961385] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:14.777 19:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 
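As in the clean variant above, the dirty run that follows drives the exported lvol over NVMe/TCP with bdevperf. Condensed from the trace (socket path, core mask, and queue settings as recorded; rpc.py and bdevperf.py shorten the full workspace paths), the I/O phase is roughly:

  # start bdevperf idle (-z), then attach the target and kick off I/O via its RPC socket
  bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests     # the lvstore is grown while I/O runs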
00:08:15.036 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1389372 00:08:15.037 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:15.037 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:15.037 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1389372 /var/tmp/bdevperf.sock 00:08:15.037 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1389372 ']' 00:08:15.037 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:15.037 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:15.037 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:15.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:15.037 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:15.037 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:15.037 [2024-07-24 19:09:01.181836] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:08:15.037 [2024-07-24 19:09:01.181887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1389372 ] 00:08:15.037 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.037 [2024-07-24 19:09:01.251831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.296 [2024-07-24 19:09:01.323908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.864 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:15.864 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:15.865 19:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:16.124 Nvme0n1 00:08:16.124 19:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:16.382 [ 00:08:16.382 { 00:08:16.382 "name": "Nvme0n1", 00:08:16.382 "aliases": [ 00:08:16.382 "5a159521-0c6e-432e-93c8-3e588a7190cb" 00:08:16.382 ], 00:08:16.382 "product_name": "NVMe disk", 00:08:16.382 "block_size": 4096, 00:08:16.382 "num_blocks": 38912, 00:08:16.382 "uuid": "5a159521-0c6e-432e-93c8-3e588a7190cb", 00:08:16.382 "assigned_rate_limits": { 00:08:16.382 "rw_ios_per_sec": 0, 00:08:16.382 "rw_mbytes_per_sec": 0, 00:08:16.382 "r_mbytes_per_sec": 0, 00:08:16.382 "w_mbytes_per_sec": 0 00:08:16.382 }, 00:08:16.382 "claimed": false, 00:08:16.382 "zoned": false, 00:08:16.382 "supported_io_types": { 00:08:16.382 "read": true, 00:08:16.382 "write": true, 00:08:16.382 "unmap": true, 00:08:16.382 "flush": true, 00:08:16.382 "reset": true, 00:08:16.382 "nvme_admin": true, 00:08:16.382 "nvme_io": true, 00:08:16.382 "nvme_io_md": false, 00:08:16.382 "write_zeroes": true, 00:08:16.382 "zcopy": false, 00:08:16.382 "get_zone_info": false, 00:08:16.382 "zone_management": false, 00:08:16.382 "zone_append": false, 00:08:16.382 "compare": true, 00:08:16.382 "compare_and_write": true, 00:08:16.382 "abort": true, 00:08:16.382 "seek_hole": false, 00:08:16.382 "seek_data": false, 00:08:16.382 "copy": true, 00:08:16.382 "nvme_iov_md": false 00:08:16.382 }, 00:08:16.382 "memory_domains": [ 00:08:16.382 { 00:08:16.382 "dma_device_id": "system", 00:08:16.382 "dma_device_type": 1 00:08:16.382 } 00:08:16.382 ], 00:08:16.382 "driver_specific": { 00:08:16.382 "nvme": [ 00:08:16.382 { 00:08:16.382 "trid": { 00:08:16.382 "trtype": "TCP", 00:08:16.382 "adrfam": "IPv4", 00:08:16.382 "traddr": "10.0.0.2", 00:08:16.382 "trsvcid": "4420", 00:08:16.382 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:16.382 }, 00:08:16.382 "ctrlr_data": { 00:08:16.382 "cntlid": 1, 00:08:16.382 "vendor_id": "0x8086", 00:08:16.382 "model_number": "SPDK bdev Controller", 00:08:16.382 "serial_number": "SPDK0", 00:08:16.382 "firmware_revision": "24.09", 00:08:16.382 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:16.382 "oacs": { 00:08:16.382 "security": 0, 00:08:16.382 "format": 0, 00:08:16.382 "firmware": 0, 00:08:16.382 "ns_manage": 0 00:08:16.382 }, 00:08:16.382 
"multi_ctrlr": true, 00:08:16.382 "ana_reporting": false 00:08:16.382 }, 00:08:16.382 "vs": { 00:08:16.382 "nvme_version": "1.3" 00:08:16.382 }, 00:08:16.382 "ns_data": { 00:08:16.382 "id": 1, 00:08:16.382 "can_share": true 00:08:16.382 } 00:08:16.382 } 00:08:16.382 ], 00:08:16.382 "mp_policy": "active_passive" 00:08:16.382 } 00:08:16.382 } 00:08:16.382 ] 00:08:16.382 19:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1389677 00:08:16.382 19:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:16.382 19:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:16.382 Running I/O for 10 seconds... 00:08:17.317 Latency(us) 00:08:17.317 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:17.317 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.317 Nvme0n1 : 1.00 22977.00 89.75 0.00 0.00 0.00 0.00 0.00 00:08:17.317 =================================================================================================================== 00:08:17.317 Total : 22977.00 89.75 0.00 0.00 0.00 0.00 0.00 00:08:17.317 00:08:18.251 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 31c866fe-f423-4d7a-a16e-93c9c4e079b7 00:08:18.251 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.251 Nvme0n1 : 2.00 23100.50 90.24 0.00 0.00 0.00 0.00 0.00 00:08:18.251 =================================================================================================================== 00:08:18.251 Total : 23100.50 90.24 0.00 0.00 0.00 0.00 0.00 00:08:18.251 00:08:18.510 true 00:08:18.510 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31c866fe-f423-4d7a-a16e-93c9c4e079b7 00:08:18.510 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:18.510 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:18.510 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:18.510 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1389677 00:08:19.446 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.446 Nvme0n1 : 3.00 23160.33 90.47 0.00 0.00 0.00 0.00 0.00 00:08:19.446 =================================================================================================================== 00:08:19.446 Total : 23160.33 90.47 0.00 0.00 0.00 0.00 0.00 00:08:19.446 00:08:20.382 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.382 Nvme0n1 : 4.00 23228.25 90.74 0.00 0.00 0.00 0.00 0.00 00:08:20.382 =================================================================================================================== 00:08:20.382 Total : 23228.25 90.74 0.00 0.00 0.00 0.00 0.00 00:08:20.382 00:08:21.316 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:08:21.316 Nvme0n1 : 5.00 23289.80 90.98 0.00 0.00 0.00 0.00 0.00 00:08:21.316 =================================================================================================================== 00:08:21.316 Total : 23289.80 90.98 0.00 0.00 0.00 0.00 0.00 00:08:21.316 00:08:22.252 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.252 Nvme0n1 : 6.00 23337.50 91.16 0.00 0.00 0.00 0.00 0.00 00:08:22.252 =================================================================================================================== 00:08:22.252 Total : 23337.50 91.16 0.00 0.00 0.00 0.00 0.00 00:08:22.252 00:08:23.631 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.631 Nvme0n1 : 7.00 23371.57 91.30 0.00 0.00 0.00 0.00 0.00 00:08:23.631 =================================================================================================================== 00:08:23.631 Total : 23371.57 91.30 0.00 0.00 0.00 0.00 0.00 00:08:23.631 00:08:24.569 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.569 Nvme0n1 : 8.00 23386.12 91.35 0.00 0.00 0.00 0.00 0.00 00:08:24.569 =================================================================================================================== 00:08:24.569 Total : 23386.12 91.35 0.00 0.00 0.00 0.00 0.00 00:08:24.569 00:08:25.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.503 Nvme0n1 : 9.00 23410.78 91.45 0.00 0.00 0.00 0.00 0.00 00:08:25.503 =================================================================================================================== 00:08:25.503 Total : 23410.78 91.45 0.00 0.00 0.00 0.00 0.00 00:08:25.503 00:08:26.440 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.440 Nvme0n1 : 10.00 23432.90 91.53 0.00 0.00 0.00 0.00 0.00 00:08:26.440 =================================================================================================================== 00:08:26.440 Total : 23432.90 91.53 0.00 0.00 0.00 0.00 0.00 00:08:26.440 00:08:26.440 00:08:26.440 Latency(us) 00:08:26.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:26.440 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.440 Nvme0n1 : 10.01 23432.94 91.53 0.00 0.00 5458.62 4141.88 13946.06 00:08:26.440 =================================================================================================================== 00:08:26.440 Total : 23432.94 91.53 0.00 0.00 5458.62 4141.88 13946.06 00:08:26.440 0 00:08:26.440 19:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1389372 00:08:26.440 19:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1389372 ']' 00:08:26.440 19:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1389372 00:08:26.440 19:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:26.440 19:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:26.440 19:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1389372 00:08:26.440 19:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:26.440 
19:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:26.440 19:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1389372' 00:08:26.440 killing process with pid 1389372 00:08:26.440 19:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1389372 00:08:26.440 Received shutdown signal, test time was about 10.000000 seconds 00:08:26.440 00:08:26.440 Latency(us) 00:08:26.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:26.440 =================================================================================================================== 00:08:26.440 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:26.440 19:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1389372 00:08:26.699 19:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:26.699 19:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:26.958 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31c866fe-f423-4d7a-a16e-93c9c4e079b7 00:08:26.958 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:27.217 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:27.217 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:27.218 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1386071 00:08:27.218 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1386071 00:08:27.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1386071 Killed "${NVMF_APP[@]}" "$@" 00:08:27.218 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:27.218 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:27.218 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:27.218 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:27.218 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:27.218 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1391943 00:08:27.218 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1391943 00:08:27.218 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1391943 ']' 00:08:27.218 19:09:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.218 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:27.218 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.218 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:27.218 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:27.218 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:27.218 [2024-07-24 19:09:13.348422] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:08:27.218 [2024-07-24 19:09:13.348475] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.218 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.218 [2024-07-24 19:09:13.421489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.477 [2024-07-24 19:09:13.492982] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:27.477 [2024-07-24 19:09:13.493021] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:27.477 [2024-07-24 19:09:13.493034] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:27.477 [2024-07-24 19:09:13.493046] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:27.477 [2024-07-24 19:09:13.493055] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
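What follows is the dirty half of the check: the store was grown to 99 clusters while bdevperf wrote to it, the old target (pid 1386071) was killed with SIGKILL so the lvstore was never cleanly closed, and the fresh target must now recover it from the AIO file alone. Schematically (same shorthand as above; flags abbreviated):

  rpc bdev_lvol_grow_lvstore -u $lvs        # 49 -> 99 clusters, issued mid-randwrite
  kill -9 $old_nvmfpid                      # no clean lvstore unload: metadata left dirty
  nvmf_tgt -m 0x1 &                         # fresh target, empty config
  rpc bdev_aio_create $AIO aio_bdev 4096    # triggers bs_recover / metadata replay (blobs 0x0, 0x1)
  rpc bdev_wait_for_examine
  rpc bdev_get_bdevs -b $lvol -t 2000       # lvol reappears with its 38 allocated clusters
  rpc bdev_lvol_get_lvstores -u $lvs        # free (61) and total (99) clusters must match pre-kill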
00:08:27.477 [2024-07-24 19:09:13.493082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.045 19:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:28.045 19:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:28.045 19:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:28.045 19:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:28.045 19:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:28.045 19:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.045 19:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:28.304 [2024-07-24 19:09:14.322387] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:28.304 [2024-07-24 19:09:14.322481] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:28.304 [2024-07-24 19:09:14.322513] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:28.305 19:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:28.305 19:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 5a159521-0c6e-432e-93c8-3e588a7190cb 00:08:28.305 19:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=5a159521-0c6e-432e-93c8-3e588a7190cb 00:08:28.305 19:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:28.305 19:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:28.305 19:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:28.305 19:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:28.305 19:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:28.305 19:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5a159521-0c6e-432e-93c8-3e588a7190cb -t 2000 00:08:28.564 [ 00:08:28.564 { 00:08:28.564 "name": "5a159521-0c6e-432e-93c8-3e588a7190cb", 00:08:28.564 "aliases": [ 00:08:28.564 "lvs/lvol" 00:08:28.564 ], 00:08:28.564 "product_name": "Logical Volume", 00:08:28.564 "block_size": 4096, 00:08:28.564 "num_blocks": 38912, 00:08:28.564 "uuid": "5a159521-0c6e-432e-93c8-3e588a7190cb", 00:08:28.564 "assigned_rate_limits": { 00:08:28.564 "rw_ios_per_sec": 0, 00:08:28.564 "rw_mbytes_per_sec": 0, 00:08:28.564 "r_mbytes_per_sec": 0, 00:08:28.564 "w_mbytes_per_sec": 0 00:08:28.564 }, 00:08:28.564 "claimed": false, 00:08:28.564 "zoned": false, 
00:08:28.564 "supported_io_types": { 00:08:28.564 "read": true, 00:08:28.564 "write": true, 00:08:28.564 "unmap": true, 00:08:28.564 "flush": false, 00:08:28.564 "reset": true, 00:08:28.564 "nvme_admin": false, 00:08:28.564 "nvme_io": false, 00:08:28.564 "nvme_io_md": false, 00:08:28.564 "write_zeroes": true, 00:08:28.564 "zcopy": false, 00:08:28.564 "get_zone_info": false, 00:08:28.564 "zone_management": false, 00:08:28.564 "zone_append": false, 00:08:28.564 "compare": false, 00:08:28.564 "compare_and_write": false, 00:08:28.564 "abort": false, 00:08:28.564 "seek_hole": true, 00:08:28.564 "seek_data": true, 00:08:28.564 "copy": false, 00:08:28.564 "nvme_iov_md": false 00:08:28.564 }, 00:08:28.564 "driver_specific": { 00:08:28.564 "lvol": { 00:08:28.564 "lvol_store_uuid": "31c866fe-f423-4d7a-a16e-93c9c4e079b7", 00:08:28.564 "base_bdev": "aio_bdev", 00:08:28.564 "thin_provision": false, 00:08:28.564 "num_allocated_clusters": 38, 00:08:28.564 "snapshot": false, 00:08:28.564 "clone": false, 00:08:28.564 "esnap_clone": false 00:08:28.564 } 00:08:28.564 } 00:08:28.564 } 00:08:28.564 ] 00:08:28.564 19:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:28.564 19:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31c866fe-f423-4d7a-a16e-93c9c4e079b7 00:08:28.564 19:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:28.823 19:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:28.823 19:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31c866fe-f423-4d7a-a16e-93c9c4e079b7 00:08:28.823 19:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:28.823 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:28.823 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:29.116 [2024-07-24 19:09:15.170699] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:29.116 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31c866fe-f423-4d7a-a16e-93c9c4e079b7 00:08:29.116 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:29.116 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31c866fe-f423-4d7a-a16e-93c9c4e079b7 00:08:29.116 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:29.116 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:08:29.116 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:29.116 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:29.116 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:29.116 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:29.116 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:29.116 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:29.116 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31c866fe-f423-4d7a-a16e-93c9c4e079b7 00:08:29.383 request: 00:08:29.383 { 00:08:29.383 "uuid": "31c866fe-f423-4d7a-a16e-93c9c4e079b7", 00:08:29.383 "method": "bdev_lvol_get_lvstores", 00:08:29.383 "req_id": 1 00:08:29.383 } 00:08:29.383 Got JSON-RPC error response 00:08:29.383 response: 00:08:29.383 { 00:08:29.383 "code": -19, 00:08:29.383 "message": "No such device" 00:08:29.383 } 00:08:29.383 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:29.383 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:29.383 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:29.383 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:29.383 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:29.383 aio_bdev 00:08:29.383 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5a159521-0c6e-432e-93c8-3e588a7190cb 00:08:29.383 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=5a159521-0c6e-432e-93c8-3e588a7190cb 00:08:29.383 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:29.383 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:29.383 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:29.383 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:29.383 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:29.642 19:09:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5a159521-0c6e-432e-93c8-3e588a7190cb -t 2000 00:08:29.901 [ 00:08:29.901 { 00:08:29.901 "name": "5a159521-0c6e-432e-93c8-3e588a7190cb", 00:08:29.901 "aliases": [ 00:08:29.901 "lvs/lvol" 00:08:29.901 ], 00:08:29.901 "product_name": "Logical Volume", 00:08:29.901 "block_size": 4096, 00:08:29.901 "num_blocks": 38912, 00:08:29.901 "uuid": "5a159521-0c6e-432e-93c8-3e588a7190cb", 00:08:29.901 "assigned_rate_limits": { 00:08:29.901 "rw_ios_per_sec": 0, 00:08:29.901 "rw_mbytes_per_sec": 0, 00:08:29.901 "r_mbytes_per_sec": 0, 00:08:29.901 "w_mbytes_per_sec": 0 00:08:29.901 }, 00:08:29.901 "claimed": false, 00:08:29.901 "zoned": false, 00:08:29.901 "supported_io_types": { 00:08:29.901 "read": true, 00:08:29.901 "write": true, 00:08:29.902 "unmap": true, 00:08:29.902 "flush": false, 00:08:29.902 "reset": true, 00:08:29.902 "nvme_admin": false, 00:08:29.902 "nvme_io": false, 00:08:29.902 "nvme_io_md": false, 00:08:29.902 "write_zeroes": true, 00:08:29.902 "zcopy": false, 00:08:29.902 "get_zone_info": false, 00:08:29.902 "zone_management": false, 00:08:29.902 "zone_append": false, 00:08:29.902 "compare": false, 00:08:29.902 "compare_and_write": false, 00:08:29.902 "abort": false, 00:08:29.902 "seek_hole": true, 00:08:29.902 "seek_data": true, 00:08:29.902 "copy": false, 00:08:29.902 "nvme_iov_md": false 00:08:29.902 }, 00:08:29.902 "driver_specific": { 00:08:29.902 "lvol": { 00:08:29.902 "lvol_store_uuid": "31c866fe-f423-4d7a-a16e-93c9c4e079b7", 00:08:29.902 "base_bdev": "aio_bdev", 00:08:29.902 "thin_provision": false, 00:08:29.902 "num_allocated_clusters": 38, 00:08:29.902 "snapshot": false, 00:08:29.902 "clone": false, 00:08:29.902 "esnap_clone": false 00:08:29.902 } 00:08:29.902 } 00:08:29.902 } 00:08:29.902 ] 00:08:29.902 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:29.902 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31c866fe-f423-4d7a-a16e-93c9c4e079b7 00:08:29.902 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:29.902 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:29.902 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31c866fe-f423-4d7a-a16e-93c9c4e079b7 00:08:29.902 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:30.161 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:30.161 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5a159521-0c6e-432e-93c8-3e588a7190cb 00:08:30.421 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 31c866fe-f423-4d7a-a16e-93c9c4e079b7 
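The hot-remove check above and the cleanup that finishes below reduce to this sequence (sketch, same shorthand):

  rpc bdev_aio_delete aio_bdev              # hotremove: "closing lvstore lvs"
  rpc bdev_lvol_get_lvstores -u $lvs        # must now fail with -19 "No such device"
  rpc bdev_aio_create $AIO aio_bdev 4096    # recovery replays the dirty metadata once more
  rpc bdev_lvol_delete $lvol                # then the real cleanup:
  rpc bdev_lvol_delete_lvstore -u $lvs      # lvstore,
  rpc bdev_aio_delete aio_bdev              # base bdev,
  rm -f $AIO                                # and the backing file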
00:08:30.421 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:30.680 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:30.680 00:08:30.680 real 0m17.438s 00:08:30.680 user 0m43.291s 00:08:30.680 sys 0m5.084s 00:08:30.680 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.680 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:30.680 ************************************ 00:08:30.680 END TEST lvs_grow_dirty 00:08:30.680 ************************************ 00:08:30.680 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:30.680 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:30.681 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:08:30.681 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:30.681 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:30.681 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:30.681 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:30.681 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:30.681 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:30.681 nvmf_trace.0 00:08:30.940 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:30.940 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:30.940 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:30.940 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:08:30.940 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:30.940 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:08:30.940 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:30.940 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:30.940 rmmod nvme_tcp 00:08:30.940 rmmod nvme_fabrics 00:08:30.940 rmmod nvme_keyring 00:08:30.940 19:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:30.940 19:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:08:30.940 19:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:08:30.940 19:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1391943 ']' 00:08:30.940 19:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1391943 00:08:30.940 
19:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1391943 ']' 00:08:30.940 19:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1391943 00:08:30.940 19:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:30.940 19:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:30.940 19:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1391943 00:08:30.940 19:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:30.940 19:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:30.940 19:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1391943' 00:08:30.940 killing process with pid 1391943 00:08:30.940 19:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1391943 00:08:30.940 19:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1391943 00:08:31.199 19:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:31.199 19:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:31.199 19:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:31.199 19:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:31.199 19:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:31.200 19:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.200 19:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:31.200 19:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.107 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:33.107 00:08:33.107 real 0m43.874s 00:08:33.107 user 1m4.057s 00:08:33.107 sys 0m13.014s 00:08:33.107 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:33.107 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:33.107 ************************************ 00:08:33.107 END TEST nvmf_lvs_grow 00:08:33.107 ************************************ 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:33.367 ************************************ 00:08:33.367 START TEST nvmf_bdev_io_wait 00:08:33.367 ************************************ 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:33.367 * Looking for test storage... 00:08:33.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:33.367 
19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:08:33.367 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.487 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:41.487 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:08:41.487 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:41.487 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:41.487 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:41.487 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:41.487 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:41.487 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:08:41.487 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:41.487 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:08:41.487 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:08:41.487 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:08:41.487 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:08:41.487 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:08:41.487 19:09:26 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:08:41.487 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:41.487 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:41.487 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:41.487 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:41.487 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:41.487 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:41.487 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:41.487 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:41.487 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:41.487 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:41.487 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:41.487 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:41.487 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:41.487 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:41.487 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:41.487 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:41.487 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:41.487 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:41.487 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:41.487 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:41.487 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:41.487 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:41.487 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:41.488 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:41.488 Found net devices under 0000:af:00.0: cvl_0_0 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:41.488 Found net devices under 0000:af:00.1: cvl_0_1 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:41.488 19:09:26 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:41.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:41.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:08:41.488 00:08:41.488 --- 10.0.0.2 ping statistics --- 00:08:41.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.488 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:41.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:41.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:08:41.488 00:08:41.488 --- 10.0.0.1 ping statistics --- 00:08:41.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.488 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1396471 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1396471 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1396471 ']' 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:41.488 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.488 [2024-07-24 19:09:26.672109] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
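For readers following the trace: the nvmf_tcp_init block above (nvmf/common.sh@229-268) plus the nvmfappstart that follows reduce to a short shell sequence. The sketch below is a condensed restatement of the traced commands, not the verbatim library code; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are the ones this run discovered.

# Condensed sketch of the traced setup: one E810 port (cvl_0_0) becomes the
# target side inside its own network namespace, the peer port (cvl_0_1) stays
# in the default namespace as the initiator.
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
ping -c 1 10.0.0.2                                             # target reachable?
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # and back again
modprobe nvme-tcp
# The target then runs inside the namespace, paused until RPC configuration:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &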
00:08:41.488 [2024-07-24 19:09:26.672157] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.488 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.488 [2024-07-24 19:09:26.745272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:41.488 [2024-07-24 19:09:26.815673] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:41.488 [2024-07-24 19:09:26.815724] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:41.488 [2024-07-24 19:09:26.815737] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.488 [2024-07-24 19:09:26.815748] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:41.488 [2024-07-24 19:09:26.815758] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:41.488 [2024-07-24 19:09:26.815824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.489 [2024-07-24 19:09:26.815921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:41.489 [2024-07-24 19:09:26.815986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:41.489 [2024-07-24 19:09:26.815990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.489 19:09:27 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.489 [2024-07-24 19:09:27.593176] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.489 Malloc0 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.489 [2024-07-24 19:09:27.661014] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1396632 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1396635 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:41.489 { 00:08:41.489 "params": { 00:08:41.489 "name": "Nvme$subsystem", 00:08:41.489 "trtype": "$TEST_TRANSPORT", 00:08:41.489 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:41.489 "adrfam": "ipv4", 00:08:41.489 "trsvcid": "$NVMF_PORT", 00:08:41.489 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:41.489 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:41.489 "hdgst": ${hdgst:-false}, 00:08:41.489 "ddgst": ${ddgst:-false} 00:08:41.489 }, 00:08:41.489 "method": "bdev_nvme_attach_controller" 00:08:41.489 } 00:08:41.489 EOF 00:08:41.489 )") 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1396638 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:41.489 { 00:08:41.489 "params": { 00:08:41.489 "name": "Nvme$subsystem", 00:08:41.489 "trtype": "$TEST_TRANSPORT", 00:08:41.489 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:41.489 "adrfam": "ipv4", 00:08:41.489 "trsvcid": "$NVMF_PORT", 00:08:41.489 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:41.489 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:41.489 "hdgst": ${hdgst:-false}, 00:08:41.489 "ddgst": ${ddgst:-false} 00:08:41.489 }, 00:08:41.489 "method": "bdev_nvme_attach_controller" 00:08:41.489 } 00:08:41.489 EOF 00:08:41.489 )") 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1396642 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:41.489 { 00:08:41.489 "params": { 00:08:41.489 "name": "Nvme$subsystem", 00:08:41.489 "trtype": "$TEST_TRANSPORT", 00:08:41.489 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:41.489 "adrfam": "ipv4", 00:08:41.489 "trsvcid": "$NVMF_PORT", 00:08:41.489 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:41.489 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:41.489 "hdgst": ${hdgst:-false}, 00:08:41.489 "ddgst": ${ddgst:-false} 00:08:41.489 }, 00:08:41.489 "method": "bdev_nvme_attach_controller" 00:08:41.489 } 00:08:41.489 EOF 00:08:41.489 )") 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:41.489 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:41.489 { 00:08:41.489 "params": { 00:08:41.489 "name": "Nvme$subsystem", 00:08:41.489 "trtype": "$TEST_TRANSPORT", 00:08:41.489 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:41.489 "adrfam": "ipv4", 00:08:41.489 "trsvcid": "$NVMF_PORT", 00:08:41.489 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:41.489 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:41.489 "hdgst": ${hdgst:-false}, 00:08:41.489 "ddgst": ${ddgst:-false} 00:08:41.489 }, 00:08:41.489 "method": "bdev_nvme_attach_controller" 00:08:41.489 } 00:08:41.489 EOF 00:08:41.490 )") 00:08:41.490 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:41.490 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1396632 00:08:41.490 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:41.490 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:41.490 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:41.490 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:41.490 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:41.490 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:41.490 "params": { 00:08:41.490 "name": "Nvme1", 00:08:41.490 "trtype": "tcp", 00:08:41.490 "traddr": "10.0.0.2", 00:08:41.490 "adrfam": "ipv4", 00:08:41.490 "trsvcid": "4420", 00:08:41.490 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:41.490 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:41.490 "hdgst": false, 00:08:41.490 "ddgst": false 00:08:41.490 }, 00:08:41.490 "method": "bdev_nvme_attach_controller" 00:08:41.490 }' 00:08:41.490 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:08:41.490 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:41.490 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:41.490 "params": { 00:08:41.490 "name": "Nvme1", 00:08:41.490 "trtype": "tcp", 00:08:41.490 "traddr": "10.0.0.2", 00:08:41.490 "adrfam": "ipv4", 00:08:41.490 "trsvcid": "4420", 00:08:41.490 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:41.490 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:41.490 "hdgst": false, 00:08:41.490 "ddgst": false 00:08:41.490 }, 00:08:41.490 "method": "bdev_nvme_attach_controller" 00:08:41.490 }' 00:08:41.490 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:41.490 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:41.490 "params": { 00:08:41.490 "name": "Nvme1", 00:08:41.490 "trtype": "tcp", 00:08:41.490 "traddr": "10.0.0.2", 00:08:41.490 "adrfam": "ipv4", 00:08:41.490 "trsvcid": "4420", 00:08:41.490 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:41.490 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:41.490 "hdgst": false, 00:08:41.490 "ddgst": false 00:08:41.490 }, 00:08:41.490 "method": "bdev_nvme_attach_controller" 00:08:41.490 }' 00:08:41.490 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:41.490 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:41.490 "params": { 00:08:41.490 "name": "Nvme1", 00:08:41.490 "trtype": "tcp", 00:08:41.490 "traddr": "10.0.0.2", 00:08:41.490 "adrfam": "ipv4", 00:08:41.490 "trsvcid": "4420", 00:08:41.490 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:41.490 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:41.490 "hdgst": false, 00:08:41.490 "ddgst": false 00:08:41.490 }, 00:08:41.490 "method": "bdev_nvme_attach_controller" 00:08:41.490 }' 00:08:41.490 [2024-07-24 19:09:27.712402] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:08:41.490 [2024-07-24 19:09:27.712455] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:41.490 [2024-07-24 19:09:27.715578] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... [2024-07-24 19:09:27.715581] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... [2024-07-24 19:09:27.715626] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:41.490 [2024-07-24 19:09:27.715626] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:41.490 [2024-07-24 19:09:27.718973] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization...
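What produced the burst of EAL banners above: bdev_io_wait.sh launches four bdevperf clients in parallel, one workload and one dedicated core apiece, each with its own instance id so the EAL file prefixes (spdk1 through spdk4) and shared-memory segments stay separate. Roughly, with paths shortened and option strings as traced:

BPERF=./build/examples/bdevperf
# Each client gets its own core mask (-m), instance id (-i) and workload (-w);
# the JSON config from gen_nvmf_target_json arrives via process substitution.
$BPERF -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!
$BPERF -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 &
READ_PID=$!
$BPERF -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
FLUSH_PID=$!
$BPERF -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
UNMAP_PID=$!
sync                 # traced at bdev_io_wait.sh@35, once all four are launched
wait $WRITE_PID      # @37-40: reap each client and propagate its exit status
wait $READ_PID
wait $FLUSH_PID
wait $UNMAP_PID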
00:08:41.490 [2024-07-24 19:09:27.719021] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:41.748 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.748 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.748 [2024-07-24 19:09:27.901074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.748 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.748 [2024-07-24 19:09:27.976659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.748 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.748 [2024-07-24 19:09:27.986931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:42.005 [2024-07-24 19:09:28.031148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.005 [2024-07-24 19:09:28.052703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:08:42.005 [2024-07-24 19:09:28.102031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:08:42.005 [2024-07-24 19:09:28.129999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.005 [2024-07-24 19:09:28.222357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:08:42.262 Running I/O for 1 seconds... 00:08:42.262 Running I/O for 1 seconds... 00:08:42.262 Running I/O for 1 seconds... 00:08:42.262 Running I/O for 1 seconds... 00:08:43.193 00:08:43.193 Latency(us) 00:08:43.193 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.193 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:43.193 Nvme1n1 : 1.00 258127.21 1008.31 0.00 0.00 494.27 204.80 625.87 00:08:43.193 =================================================================================================================== 00:08:43.193 Total : 258127.21 1008.31 0.00 0.00 494.27 204.80 625.87 00:08:43.193 00:08:43.193 Latency(us) 00:08:43.193 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.193 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:43.194 Nvme1n1 : 1.02 8891.27 34.73 0.00 0.00 14267.32 3670.02 24012.39 00:08:43.194 =================================================================================================================== 00:08:43.194 Total : 8891.27 34.73 0.00 0.00 14267.32 3670.02 24012.39 00:08:43.452 00:08:43.452 Latency(us) 00:08:43.452 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.452 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:43.452 Nvme1n1 : 1.01 11719.91 45.78 0.00 0.00 10881.12 3106.41 18350.08 00:08:43.452 =================================================================================================================== 00:08:43.452 Total : 11719.91 45.78 0.00 0.00 10881.12 3106.41 18350.08 00:08:43.452 00:08:43.452 Latency(us) 00:08:43.452 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.452 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:43.452 Nvme1n1 : 1.00 8365.28 32.68 0.00 0.00 15261.09 4902.09 36071.01 00:08:43.452 =================================================================================================================== 00:08:43.452 Total : 8365.28 32.68 0.00 0.00 15261.09 4902.09 36071.01 00:08:43.453 19:09:29 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1396635 00:08:43.453 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1396638 00:08:43.453 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1396642 00:08:43.711 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:43.711 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.711 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:43.711 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.711 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:43.711 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:43.711 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:43.711 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:08:43.711 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:43.711 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:08:43.711 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:43.711 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:43.711 rmmod nvme_tcp 00:08:43.711 rmmod nvme_fabrics 00:08:43.711 rmmod nvme_keyring 00:08:43.711 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:43.711 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:08:43.711 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:08:43.711 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1396471 ']' 00:08:43.711 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1396471 00:08:43.711 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1396471 ']' 00:08:43.711 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1396471 00:08:43.711 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:43.711 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:43.711 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1396471 00:08:43.711 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:43.711 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:43.711 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1396471' 00:08:43.711 killing process with pid 1396471 00:08:43.711 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1396471 00:08:43.711 19:09:29 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1396471 00:08:43.971 19:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:43.971 19:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:43.971 19:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:43.971 19:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:43.971 19:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:43.971 19:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.971 19:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:43.971 19:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.874 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:45.874 00:08:45.874 real 0m12.688s 00:08:45.874 user 0m20.040s 00:08:45.874 sys 0m7.287s 00:08:45.874 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:45.874 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.874 ************************************ 00:08:45.874 END TEST nvmf_bdev_io_wait 00:08:45.874 ************************************ 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:46.133 ************************************ 00:08:46.133 START TEST nvmf_queue_depth 00:08:46.133 ************************************ 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:46.133 * Looking for test storage... 
00:08:46.133 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:46.133 19:09:32 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:08:46.133 19:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:08:54.256 19:09:38 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:54.256 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:54.256 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:54.256 19:09:38 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:54.256 Found net devices under 0000:af:00.0: cvl_0_0 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:54.256 Found net devices under 0000:af:00.1: cvl_0_1 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:54.256 
19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:54.256 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:54.257 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:54.257 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:54.257 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:54.257 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:54.257 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:54.257 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:54.257 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:54.257 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:54.257 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:54.257 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:54.257 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:54.257 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:54.257 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:54.257 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:54.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:54.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:08:54.257 00:08:54.257 --- 10.0.0.2 ping statistics --- 00:08:54.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.257 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:08:54.257 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:54.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:54.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:08:54.257 00:08:54.257 --- 10.0.0.1 ping statistics --- 00:08:54.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.257 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:08:54.257 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.257 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:08:54.257 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:54.257 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.257 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:54.257 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:54.257 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.257 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:54.257 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:54.257 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:54.257 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:54.257 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:54.257 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.257 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1400741 00:08:54.257 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1400741 00:08:54.257 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:54.257 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1400741 ']' 00:08:54.257 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.257 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:54.257 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.257 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:54.257 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.257 [2024-07-24 19:09:39.367860] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
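The waitforlisten helper whose markers appear above (autotest_common.sh@831-864) gates the test until the target's RPC socket answers. The following is a reconstruction from the visible checks in the trace, not the repository's exact code; the rpc_get_methods probe in particular is an assumption about how the socket is tested.

waitforlisten() {
    # @831: a PID is mandatory
    [[ -n ${1:-} ]] || return 1
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}   # @835: default RPC socket
    local max_retries=100                     # @836
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    local i
    for ((i = 0; i < max_retries; i++)); do
        # Give up if the process died while we were waiting.
        kill -0 "$pid" 2> /dev/null || return 1
        # Probe the RPC socket; any successful call means the app is listening.
        if ./scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
            return 0   # @864: target is up, proceed with the test
        fi
        sleep 0.5
    done
    return 1
}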
00:08:54.257 [2024-07-24 19:09:39.367909] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.257 EAL: No free 2048 kB hugepages reported on node 1 00:08:54.257 [2024-07-24 19:09:39.441234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.257 [2024-07-24 19:09:39.509576] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.257 [2024-07-24 19:09:39.509618] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:54.257 [2024-07-24 19:09:39.509627] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:54.257 [2024-07-24 19:09:39.509635] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:54.257 [2024-07-24 19:09:39.509658] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:54.257 [2024-07-24 19:09:39.509680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.257 19:09:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:54.257 19:09:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:54.257 19:09:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:54.257 19:09:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:54.257 19:09:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.257 19:09:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.257 19:09:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:54.257 19:09:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.257 19:09:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.257 [2024-07-24 19:09:40.214623] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:54.257 19:09:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.257 19:09:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:54.257 19:09:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.257 19:09:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.257 Malloc0 00:08:54.257 19:09:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.257 19:09:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:54.257 19:09:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.257 19:09:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.257 19:09:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.257 19:09:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:54.257 19:09:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.257 19:09:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.257 19:09:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.257 19:09:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:54.257 19:09:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.257 19:09:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.257 [2024-07-24 19:09:40.277422] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:54.257 19:09:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.257 19:09:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1401031 00:08:54.257 19:09:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:54.258 19:09:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1401031 /var/tmp/bdevperf.sock 00:08:54.258 19:09:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1401031 ']' 00:08:54.258 19:09:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:54.258 19:09:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:54.258 19:09:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:54.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:54.258 19:09:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:54.258 19:09:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:54.258 19:09:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.258 [2024-07-24 19:09:40.330898] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
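Condensed from the queue_depth.sh trace above, the whole scenario is five target-side RPCs plus a bdevperf run at queue depth 1024. Every command below is verbatim from the trace; rpc.py talks to the target's /var/tmp/spdk.sock by default and to bdevperf's socket when passed -s.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                    # queue_depth.sh@23
    $rpc bdev_malloc_create 64 512 -b Malloc0                       # queue_depth.sh@24: 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001    # queue_depth.sh@25
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # queue_depth.sh@26
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # queue_depth.sh@27

    # Initiator side: bdevperf in RPC-wait mode (-z), 4 KiB verify I/O for 10 s at qd 1024.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &    # queue_depth.sh@29
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # queue_depth.sh@34
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests                           # queue_depth.sh@35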
00:08:54.258 [2024-07-24 19:09:40.330943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1401031 ] 00:08:54.258 EAL: No free 2048 kB hugepages reported on node 1 00:08:54.258 [2024-07-24 19:09:40.401146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.258 [2024-07-24 19:09:40.472112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.281 19:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:55.281 19:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:55.281 19:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:55.281 19:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.281 19:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:55.281 NVMe0n1 00:08:55.281 19:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.281 19:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:55.281 Running I/O for 10 seconds... 00:09:05.275 00:09:05.275 Latency(us) 00:09:05.275 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.275 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:05.275 Verification LBA range: start 0x0 length 0x4000 00:09:05.275 NVMe0n1 : 10.06 13025.85 50.88 0.00 0.00 78384.07 18874.37 53687.09 00:09:05.275 =================================================================================================================== 00:09:05.275 Total : 13025.85 50.88 0.00 0.00 78384.07 18874.37 53687.09 00:09:05.275 0 00:09:05.275 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1401031 00:09:05.275 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1401031 ']' 00:09:05.275 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1401031 00:09:05.275 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:05.275 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:05.535 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1401031 00:09:05.535 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:05.535 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:05.535 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1401031' 00:09:05.535 killing process with pid 1401031 00:09:05.535 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1401031 00:09:05.535 Received shutdown 
signal, test time was about 10.000000 seconds 00:09:05.535 00:09:05.535 Latency(us) 00:09:05.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.535 =================================================================================================================== 00:09:05.535 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:05.535 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1401031 00:09:05.535 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:05.535 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:05.535 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:05.535 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:09:05.535 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:05.535 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:09:05.535 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:05.535 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:05.535 rmmod nvme_tcp 00:09:05.535 rmmod nvme_fabrics 00:09:05.795 rmmod nvme_keyring 00:09:05.795 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:05.795 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:09:05.795 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:09:05.795 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1400741 ']' 00:09:05.795 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1400741 00:09:05.795 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1400741 ']' 00:09:05.795 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1400741 00:09:05.795 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:05.795 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:05.795 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1400741 00:09:05.795 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:05.795 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:05.795 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1400741' 00:09:05.795 killing process with pid 1400741 00:09:05.795 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1400741 00:09:05.795 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1400741 00:09:06.055 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:06.055 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:06.055 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:06.055 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:06.055 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:06.055 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.055 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.055 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.961 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:07.961 00:09:07.961 real 0m21.954s 00:09:07.961 user 0m24.996s 00:09:07.961 sys 0m7.311s 00:09:07.961 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:07.961 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:07.961 ************************************ 00:09:07.961 END TEST nvmf_queue_depth 00:09:07.961 ************************************ 00:09:07.961 19:09:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:07.961 19:09:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:07.961 19:09:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:07.961 19:09:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:08.221 ************************************ 00:09:08.222 START TEST nvmf_target_multipath 00:09:08.222 ************************************ 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:08.222 * Looking for test storage... 
00:09:08.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:09:08.222 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 
00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:14.797 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:14.797 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:14.797 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.798 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:14.798 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:14.798 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.798 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:14.798 Found net devices under 0000:af:00.0: cvl_0_0 00:09:14.798 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.798 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:14.798 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.798 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:14.798 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.798 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:14.798 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:14.798 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.798 19:10:00 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:14.798 Found net devices under 0000:af:00.1: cvl_0_1 00:09:14.798 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.798 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:14.798 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:09:14.798 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:14.798 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:14.798 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:14.798 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:14.798 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:14.798 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:14.798 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:14.798 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:14.798 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:14.798 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:14.798 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:14.798 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:14.798 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:14.798 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:14.798 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:14.798 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:14.798 19:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:14.798 19:10:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:14.798 19:10:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:14.798 19:10:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:15.057 19:10:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:15.057 19:10:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:15.057 19:10:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:15.057 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:15.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:09:15.057 00:09:15.057 --- 10.0.0.2 ping statistics --- 00:09:15.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.057 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:09:15.057 19:10:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:15.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:15.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:09:15.057 00:09:15.057 --- 10.0.0.1 ping statistics --- 00:09:15.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.057 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:09:15.057 19:10:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:15.057 19:10:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:09:15.057 19:10:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:15.057 19:10:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:15.057 19:10:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:15.057 19:10:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:15.057 19:10:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:15.057 19:10:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:15.057 19:10:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:15.057 19:10:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:15.057 19:10:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:15.057 only one NIC for nvmf test 00:09:15.057 19:10:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:15.057 19:10:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:15.057 19:10:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:15.057 19:10:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:15.057 19:10:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:15.057 19:10:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:15.057 19:10:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:15.057 rmmod nvme_tcp 00:09:15.057 rmmod nvme_fabrics 00:09:15.057 rmmod nvme_keyring 00:09:15.057 19:10:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:15.057 19:10:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:15.057 19:10:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:15.057 19:10:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:15.057 19:10:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:15.057 19:10:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:15.057 19:10:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:15.057 19:10:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:15.057 19:10:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:15.057 19:10:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.057 19:10:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:15.057 19:10:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.592 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:17.592 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:17.592 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:17.592 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:17.592 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:17.592 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:17.592 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:17.592 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:17.592 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:17.592 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:17.592 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:17.592 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:17.592 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:17.592 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:17.592 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:17.592 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:17.592 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:17.592 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:17.592 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.592 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:17.592 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.592 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:17.592 00:09:17.592 real 0m9.161s 
00:09:17.592 user 0m1.871s 00:09:17.592 sys 0m5.312s 00:09:17.592 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:17.592 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:17.592 ************************************ 00:09:17.592 END TEST nvmf_target_multipath 00:09:17.592 ************************************ 00:09:17.592 19:10:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:17.592 19:10:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:17.592 19:10:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:17.592 19:10:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:17.592 ************************************ 00:09:17.592 START TEST nvmf_zcopy 00:09:17.592 ************************************ 00:09:17.592 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:17.592 * Looking for test storage... 00:09:17.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:17.592 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:17.592 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:17.592 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:17.592 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:17.592 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:17.592 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:17.592 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:17.593 19:10:03 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:17.593 19:10:03 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:09:17.593 19:10:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.163 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:24.163 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:09:24.163 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:24.163 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:24.163 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:24.163 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:24.163 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:24.163 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:09:24.163 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:24.163 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:09:24.163 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:09:24.163 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:09:24.163 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:09:24.163 19:10:09 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:09:24.163 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:09:24.163 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:24.163 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:24.163 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:24.163 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:24.163 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:24.163 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:24.163 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:24.163 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:24.164 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:24.164 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 
-- # [[ ice == unbound ]] 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:24.164 Found net devices under 0000:af:00.0: cvl_0_0 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:24.164 Found net devices under 0000:af:00.1: cvl_0_1 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.164 19:10:09 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:09:24.164 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:09:24.165 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:09:24.165 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:24.165 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:09:24.165 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:09:24.165 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:09:24.165 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:24.165 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:24.165 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:24.165 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:09:24.165 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:24.165 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:24.165 19:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:24.165 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:09:24.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:24.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms
00:09:24.165
00:09:24.165 --- 10.0.0.2 ping statistics ---
00:09:24.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:24.165 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms
00:09:24.165 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:24.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:24.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms
00:09:24.165
00:09:24.165 --- 10.0.0.1 ping statistics ---
00:09:24.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:24.165 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms
00:09:24.165 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:24.165 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0
00:09:24.165 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:09:24.165 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:24.165 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:09:24.165 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:09:24.165 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:24.165 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:09:24.165 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:09:24.165 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:09:24.165 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:09:24.165 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable
00:09:24.165 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:24.165 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1410242
00:09:24.165 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:09:24.165 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1410242
00:09:24.165 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1410242 ']'
00:09:24.165 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:24.165 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:24.165 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:24.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:24.165 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:24.165 [2024-07-24 19:10:10.121871] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization...
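Condensed, the bring-up that common.sh just traced: flush both E810 ports, move the target-side port (cvl_0_0) into a private network namespace as 10.0.0.2/24, keep the initiator-side port (cvl_0_1) in the root namespace as 10.0.0.1/24, open TCP port 4420 through iptables, and ping in both directions before any NVMe/TCP traffic starts. A minimal standalone sketch of the same topology, where NIC_TGT/NIC_INI are hypothetical stand-ins for whatever ports cvl_0_0/cvl_0_1 resolve to on a given rig:

# Sketch of the namespace split traced above; run as root.
NS=cvl_0_0_ns_spdk
NIC_TGT=${NIC_TGT:?target-side netdev}      # plays the role of cvl_0_0
NIC_INI=${NIC_INI:?initiator-side netdev}   # plays the role of cvl_0_1

ip -4 addr flush "$NIC_TGT"
ip -4 addr flush "$NIC_INI"
ip netns add "$NS"
ip link set "$NIC_TGT" netns "$NS"                 # target port leaves the root ns
ip addr add 10.0.0.1/24 dev "$NIC_INI"             # initiator side stays put
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$NIC_TGT"
ip link set "$NIC_INI" up
ip netns exec "$NS" ip link set "$NIC_TGT" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$NIC_INI" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # root ns -> namespaced target
ip netns exec "$NS" ping -c 1 10.0.0.1             # target ns -> initiator

From this point every target-side command is simply wrapped in ip netns exec cvl_0_0_ns_spdk, which is what the NVMF_TARGET_NS_CMD array captures and what gets prepended to NVMF_APP; the nvmf_tgt launch that follows uses exactly that wrapper.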
00:09:24.165 [2024-07-24 19:10:10.121918] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.165 EAL: No free 2048 kB hugepages reported on node 1 00:09:24.165 [2024-07-24 19:10:10.194295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.165 [2024-07-24 19:10:10.259888] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.165 [2024-07-24 19:10:10.259931] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:24.165 [2024-07-24 19:10:10.259940] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.165 [2024-07-24 19:10:10.259948] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.165 [2024-07-24 19:10:10.259955] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:24.165 [2024-07-24 19:10:10.259975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.734 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:24.734 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:24.734 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:24.734 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:24.734 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.734 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.734 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:24.734 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:24.734 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.735 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.735 [2024-07-24 19:10:10.958031] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:24.735 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.735 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:24.735 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.735 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.735 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.735 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.735 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.735 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.994 [2024-07-24 19:10:10.974197] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.994 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.994 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:24.994 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.994 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.994 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.994 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:24.994 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.994 19:10:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.994 malloc0 00:09:24.994 19:10:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.994 19:10:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:24.994 19:10:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.994 19:10:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.994 19:10:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.995 19:10:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:24.995 19:10:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:24.995 19:10:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:24.995 19:10:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:24.995 19:10:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:24.995 19:10:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:24.995 { 00:09:24.995 "params": { 00:09:24.995 "name": "Nvme$subsystem", 00:09:24.995 "trtype": "$TEST_TRANSPORT", 00:09:24.995 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:24.995 "adrfam": "ipv4", 00:09:24.995 "trsvcid": "$NVMF_PORT", 00:09:24.995 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:24.995 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:24.995 "hdgst": ${hdgst:-false}, 00:09:24.995 "ddgst": ${ddgst:-false} 00:09:24.995 }, 00:09:24.995 "method": "bdev_nvme_attach_controller" 00:09:24.995 } 00:09:24.995 EOF 00:09:24.995 )") 00:09:24.995 19:10:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:24.995 19:10:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
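The gen_nvmf_target_json trace that ends at the jq step above is a heredoc-template pattern: one bdev_nvme_attach_controller JSON object is expanded per subsystem from shell variables, the objects are joined with IFS=',', and the result is piped through jq so bdevperf can read a validated config from an anonymous file descriptor (the --json /dev/fd/62 on its command line). A condensed sketch of that pattern for the single-subsystem default of this run (gen_json is a stand-in name, and the fallback values mirror this run's environment rather than the library's defaults):

gen_json() {
  local subsystem config=()
  for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  local IFS=,
  # With one subsystem (the default) the joined string is a single valid
  # JSON object, so jq both validates and pretty-prints it.
  printf '%s\n' "${config[*]}" | jq .
}

bdevperf would then consume it as, e.g., bdevperf --json <(gen_json) -t 10 -q 128 -w verify -o 8192, which is where the /dev/fd/62 redirection comes from; the pretty-printed object that prints next in the trace is exactly this expansion with Nvme1, 10.0.0.2 and port 4420 filled in.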
00:09:24.995 19:10:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:09:24.995 19:10:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:09:24.995 "params": {
00:09:24.995 "name": "Nvme1",
00:09:24.995 "trtype": "tcp",
00:09:24.995 "traddr": "10.0.0.2",
00:09:24.995 "adrfam": "ipv4",
00:09:24.995 "trsvcid": "4420",
00:09:24.995 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:24.995 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:24.995 "hdgst": false,
00:09:24.995 "ddgst": false
00:09:24.995 },
00:09:24.995 "method": "bdev_nvme_attach_controller"
00:09:24.995 }'
00:09:24.995 [2024-07-24 19:10:11.064707] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization...
00:09:24.995 [2024-07-24 19:10:11.064761] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1410507 ]
00:09:24.995 EAL: No free 2048 kB hugepages reported on node 1
00:09:24.995 [2024-07-24 19:10:11.134406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:24.995 [2024-07-24 19:10:11.203810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:25.254 Running I/O for 10 seconds...
00:09:35.288
00:09:35.288 Latency(us)
00:09:35.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:35.288 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:09:35.288 Verification LBA range: start 0x0 length 0x1000
00:09:35.288 Nvme1n1 : 10.01 8957.25 69.98 0.00 0.00 14249.46 573.44 29779.56
00:09:35.288 ===================================================================================================================
00:09:35.288 Total : 8957.25 69.98 0.00 0.00 14249.46 573.44 29779.56
00:09:35.548 19:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1412241
00:09:35.548 19:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:09:35.548 19:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:35.548 19:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:09:35.548 19:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:09:35.548 19:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:09:35.548 19:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:09:35.548 19:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:09:35.548 19:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:09:35.548 {
00:09:35.548 "params": {
00:09:35.548 "name": "Nvme$subsystem",
00:09:35.548 "trtype": "$TEST_TRANSPORT",
00:09:35.548 "traddr": "$NVMF_FIRST_TARGET_IP",
00:09:35.548 "adrfam": "ipv4",
00:09:35.548 "trsvcid": "$NVMF_PORT",
00:09:35.548 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:09:35.548 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:09:35.548 "hdgst": ${hdgst:-false},
00:09:35.548 "ddgst": ${ddgst:-false}
00:09:35.548 },
00:09:35.548 "method": "bdev_nvme_attach_controller"
00:09:35.548 }
00:09:35.548 EOF
00:09:35.548 )")
00:09:35.548 [2024-07-24
19:10:21.619385] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.548 [2024-07-24 19:10:21.619422] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.548 19:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:35.548 19:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:09:35.548 [2024-07-24 19:10:21.627374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.548 [2024-07-24 19:10:21.627388] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.548 19:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:35.548 19:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:35.548 "params": { 00:09:35.548 "name": "Nvme1", 00:09:35.548 "trtype": "tcp", 00:09:35.548 "traddr": "10.0.0.2", 00:09:35.548 "adrfam": "ipv4", 00:09:35.548 "trsvcid": "4420", 00:09:35.548 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:35.548 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:35.548 "hdgst": false, 00:09:35.548 "ddgst": false 00:09:35.548 }, 00:09:35.548 "method": "bdev_nvme_attach_controller" 00:09:35.548 }' 00:09:35.548 [2024-07-24 19:10:21.635390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.548 [2024-07-24 19:10:21.635403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.548 [2024-07-24 19:10:21.643410] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.548 [2024-07-24 19:10:21.643421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.548 [2024-07-24 19:10:21.651431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.548 [2024-07-24 19:10:21.651443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.548 [2024-07-24 19:10:21.659452] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.548 [2024-07-24 19:10:21.659470] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.548 [2024-07-24 19:10:21.663272] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
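Everything from here to the end of the run is dominated by alternating pairs of "Requested NSID 1 already in use" (subsystem.c) and "Unable to add namespace" (nvmf_rpc.c). Read alongside the second bdevperf invocation (-t 5, randrw), these look like expected rejections rather than failures: each nvmf_subsystem_add_ns RPC pauses cnode1, fails because malloc0 already occupies NSID 1, and resumes it, exercising the pause/resume path while zcopy I/O is in flight. The retry loop amounts to something like the following sketch; the loop condition and the error tolerance are assumptions, only the rpc_cmd line itself appears in the trace:

# Re-issue the namespace add while the perf job (PID in $perfpid) is running.
# Every attempt is expected to fail: NSID 1 is already occupied by malloc0.
while kill -0 "$perfpid" 2> /dev/null; do
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done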
00:09:35.549 [2024-07-24 19:10:21.663319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1412241 ] 00:09:35.549 [2024-07-24 19:10:21.667475] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.549 [2024-07-24 19:10:21.667487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.549 [2024-07-24 19:10:21.675495] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.549 [2024-07-24 19:10:21.675507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.549 [2024-07-24 19:10:21.683516] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.549 [2024-07-24 19:10:21.683527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.549 [2024-07-24 19:10:21.691536] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.549 [2024-07-24 19:10:21.691548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.549 EAL: No free 2048 kB hugepages reported on node 1 00:09:35.549 [2024-07-24 19:10:21.699558] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.549 [2024-07-24 19:10:21.699570] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.549 [2024-07-24 19:10:21.707578] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.549 [2024-07-24 19:10:21.707590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.549 [2024-07-24 19:10:21.715600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.549 [2024-07-24 19:10:21.715612] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.549 [2024-07-24 19:10:21.723621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.549 [2024-07-24 19:10:21.723632] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.549 [2024-07-24 19:10:21.731641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.549 [2024-07-24 19:10:21.731652] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.549 [2024-07-24 19:10:21.733778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.549 [2024-07-24 19:10:21.739662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.549 [2024-07-24 19:10:21.739674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.549 [2024-07-24 19:10:21.747684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.549 [2024-07-24 19:10:21.747697] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.549 [2024-07-24 19:10:21.755703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.549 [2024-07-24 19:10:21.755719] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.549 [2024-07-24 19:10:21.763731] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.549 [2024-07-24 
19:10:21.763743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.549 [2024-07-24 19:10:21.771757] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.549 [2024-07-24 19:10:21.771780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.549 [2024-07-24 19:10:21.779772] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.549 [2024-07-24 19:10:21.779786] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.810 [2024-07-24 19:10:21.787792] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.810 [2024-07-24 19:10:21.787803] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.810 [2024-07-24 19:10:21.795812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.810 [2024-07-24 19:10:21.795824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.810 [2024-07-24 19:10:21.803834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.810 [2024-07-24 19:10:21.803846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.810 [2024-07-24 19:10:21.805628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.810 [2024-07-24 19:10:21.811857] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.810 [2024-07-24 19:10:21.811869] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.810 [2024-07-24 19:10:21.819888] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.810 [2024-07-24 19:10:21.819906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.810 [2024-07-24 19:10:21.831918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.810 [2024-07-24 19:10:21.831937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.810 [2024-07-24 19:10:21.839937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.810 [2024-07-24 19:10:21.839952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.810 [2024-07-24 19:10:21.847957] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.810 [2024-07-24 19:10:21.847971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.810 [2024-07-24 19:10:21.855977] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.810 [2024-07-24 19:10:21.855989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.810 [2024-07-24 19:10:21.863999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.810 [2024-07-24 19:10:21.864013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.810 [2024-07-24 19:10:21.872019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.810 [2024-07-24 19:10:21.872032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.810 [2024-07-24 19:10:21.880039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.810 [2024-07-24 19:10:21.880050] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.810 [2024-07-24 19:10:21.888081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.810 [2024-07-24 19:10:21.888099] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.810 [2024-07-24 19:10:21.896097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.810 [2024-07-24 19:10:21.896115] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.810 [2024-07-24 19:10:21.904107] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.810 [2024-07-24 19:10:21.904121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.810 [2024-07-24 19:10:21.912136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.810 [2024-07-24 19:10:21.912150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.810 [2024-07-24 19:10:21.920156] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.810 [2024-07-24 19:10:21.920167] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.810 [2024-07-24 19:10:21.928175] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.810 [2024-07-24 19:10:21.928186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.810 [2024-07-24 19:10:21.936197] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.810 [2024-07-24 19:10:21.936207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.810 [2024-07-24 19:10:21.944219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.810 [2024-07-24 19:10:21.944237] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.810 [2024-07-24 19:10:21.952243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.810 [2024-07-24 19:10:21.952254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.810 [2024-07-24 19:10:21.960267] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.810 [2024-07-24 19:10:21.960281] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.810 [2024-07-24 19:10:21.968289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.810 [2024-07-24 19:10:21.968302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.810 [2024-07-24 19:10:21.976308] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.810 [2024-07-24 19:10:21.976319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.811 [2024-07-24 19:10:21.984331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.811 [2024-07-24 19:10:21.984341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.811 [2024-07-24 19:10:21.992352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.811 [2024-07-24 19:10:21.992363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.811 [2024-07-24 19:10:22.000375] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.811 [2024-07-24 19:10:22.000386] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.811 [2024-07-24 19:10:22.008399] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.811 [2024-07-24 19:10:22.008412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.811 [2024-07-24 19:10:22.016422] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.811 [2024-07-24 19:10:22.016435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.811 [2024-07-24 19:10:22.024442] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.811 [2024-07-24 19:10:22.024453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.811 [2024-07-24 19:10:22.032463] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.811 [2024-07-24 19:10:22.032475] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.811 [2024-07-24 19:10:22.040486] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.811 [2024-07-24 19:10:22.040496] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.070 [2024-07-24 19:10:22.048507] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.071 [2024-07-24 19:10:22.048518] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.071 [2024-07-24 19:10:22.056531] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.071 [2024-07-24 19:10:22.056545] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.071 [2024-07-24 19:10:22.064552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.071 [2024-07-24 19:10:22.064562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.071 [2024-07-24 19:10:22.072573] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.071 [2024-07-24 19:10:22.072583] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.071 [2024-07-24 19:10:22.080597] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.071 [2024-07-24 19:10:22.080607] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.071 [2024-07-24 19:10:22.088617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.071 [2024-07-24 19:10:22.088628] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.071 [2024-07-24 19:10:22.096639] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.071 [2024-07-24 19:10:22.096654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.071 [2024-07-24 19:10:22.104659] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.071 [2024-07-24 19:10:22.104670] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.071 [2024-07-24 19:10:22.112707] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.071 [2024-07-24 19:10:22.112730] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.071 Running I/O for 5 seconds... 00:09:36.071 [2024-07-24 19:10:22.120720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.071 [2024-07-24 19:10:22.120733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.071 [2024-07-24 19:10:22.133509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.071 [2024-07-24 19:10:22.133531] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.071 [2024-07-24 19:10:22.144771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.071 [2024-07-24 19:10:22.144791] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.071 [2024-07-24 19:10:22.153421] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.071 [2024-07-24 19:10:22.153440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.071 [2024-07-24 19:10:22.161973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.071 [2024-07-24 19:10:22.161992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.071 [2024-07-24 19:10:22.171316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.071 [2024-07-24 19:10:22.171334] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.071 [2024-07-24 19:10:22.180274] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.071 [2024-07-24 19:10:22.180293] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.071 [2024-07-24 19:10:22.188273] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.071 [2024-07-24 19:10:22.188292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.071 [2024-07-24 19:10:22.197317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.071 [2024-07-24 19:10:22.197335] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.071 [2024-07-24 19:10:22.205596] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.071 [2024-07-24 19:10:22.205614] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.071 [2024-07-24 19:10:22.212182] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.071 [2024-07-24 19:10:22.212200] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.071 [2024-07-24 19:10:22.222891] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.071 [2024-07-24 19:10:22.222910] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.071 [2024-07-24 19:10:22.231270] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.071 [2024-07-24 19:10:22.231289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.071 [2024-07-24 19:10:22.239633] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.071 [2024-07-24 19:10:22.239651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.071 [2024-07-24 19:10:22.247959] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.071 [2024-07-24 19:10:22.247978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.071 [2024-07-24 19:10:22.257381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.071 [2024-07-24 19:10:22.257400] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.071 [2024-07-24 19:10:22.266080] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.071 [2024-07-24 19:10:22.266100] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.071 [2024-07-24 19:10:22.274520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.071 [2024-07-24 19:10:22.274539] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.071 [2024-07-24 19:10:22.282734] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.071 [2024-07-24 19:10:22.282753] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.071 [2024-07-24 19:10:22.292061] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.071 [2024-07-24 19:10:22.292080] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.071 [2024-07-24 19:10:22.300954] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.071 [2024-07-24 19:10:22.300972] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.331 [2024-07-24 19:10:22.310093] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.331 [2024-07-24 19:10:22.310112] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.331 [2024-07-24 19:10:22.318460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.331 [2024-07-24 19:10:22.318479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.331 [2024-07-24 19:10:22.327418] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.331 [2024-07-24 19:10:22.327436] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.331 [2024-07-24 19:10:22.335999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.331 [2024-07-24 19:10:22.336018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.331 [2024-07-24 19:10:22.344318] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.331 [2024-07-24 19:10:22.344336] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.331 [2024-07-24 19:10:22.352594] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.331 [2024-07-24 19:10:22.352612] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.331 [2024-07-24 19:10:22.361047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.331 [2024-07-24 19:10:22.361065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.331 [2024-07-24 19:10:22.369519] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.331 [2024-07-24 19:10:22.369538] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.331 [2024-07-24 19:10:22.378138] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.331 [2024-07-24 19:10:22.378157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.331 [2024-07-24 19:10:22.386473] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.331 [2024-07-24 19:10:22.386492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.331 [2024-07-24 19:10:22.395165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.331 [2024-07-24 19:10:22.395184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.331 [2024-07-24 19:10:22.403557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.331 [2024-07-24 19:10:22.403576] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.331 [2024-07-24 19:10:22.412276] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.331 [2024-07-24 19:10:22.412295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.331 [2024-07-24 19:10:22.420575] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.331 [2024-07-24 19:10:22.420593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.331 [2024-07-24 19:10:22.428959] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.331 [2024-07-24 19:10:22.428978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.331 [2024-07-24 19:10:22.437858] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.331 [2024-07-24 19:10:22.437877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.331 [2024-07-24 19:10:22.446066] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.331 [2024-07-24 19:10:22.446084] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.331 [2024-07-24 19:10:22.455447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.331 [2024-07-24 19:10:22.455465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.331 [2024-07-24 19:10:22.463656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.331 [2024-07-24 19:10:22.463675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.331 [2024-07-24 19:10:22.472633] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.331 [2024-07-24 19:10:22.472652] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.331 [2024-07-24 19:10:22.481431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.331 [2024-07-24 19:10:22.481450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.331 [2024-07-24 19:10:22.489694] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.331 [2024-07-24 19:10:22.489713] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.331 [2024-07-24 19:10:22.497945] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.331 [2024-07-24 19:10:22.497964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.331 [2024-07-24 19:10:22.506622] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.331 [2024-07-24 19:10:22.506640] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.331 [2024-07-24 19:10:22.516061] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.331 [2024-07-24 19:10:22.516080] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.331 [2024-07-24 19:10:22.524297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.331 [2024-07-24 19:10:22.524316] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.331 [2024-07-24 19:10:22.533228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.331 [2024-07-24 19:10:22.533246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.331 [2024-07-24 19:10:22.542099] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.331 [2024-07-24 19:10:22.542117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.331 [2024-07-24 19:10:22.550438] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.331 [2024-07-24 19:10:22.550455] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.332 [2024-07-24 19:10:22.559258] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.332 [2024-07-24 19:10:22.559277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.332 [2024-07-24 19:10:22.567881] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.332 [2024-07-24 19:10:22.567900] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.591 [2024-07-24 19:10:22.576493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.591 [2024-07-24 19:10:22.576512] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.591 [2024-07-24 19:10:22.585456] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.591 [2024-07-24 19:10:22.585475] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.591 [2024-07-24 19:10:22.594034] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.591 [2024-07-24 19:10:22.594052] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.591 [2024-07-24 19:10:22.601910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.591 [2024-07-24 19:10:22.601929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.591 [2024-07-24 19:10:22.611643] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.591 [2024-07-24 19:10:22.611662] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.591 [2024-07-24 19:10:22.620426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.591 [2024-07-24 19:10:22.620444] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.591 [2024-07-24 19:10:22.629030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.591 [2024-07-24 19:10:22.629048] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.591 [2024-07-24 19:10:22.637212] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.591 [2024-07-24 19:10:22.637231] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.591 [2024-07-24 19:10:22.646029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.591 [2024-07-24 19:10:22.646048] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.591 [2024-07-24 19:10:22.654559] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.591 [2024-07-24 19:10:22.654579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.591 [2024-07-24 19:10:22.663856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.591 [2024-07-24 19:10:22.663875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.591 [2024-07-24 19:10:22.672938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.591 [2024-07-24 19:10:22.672958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.591 [2024-07-24 19:10:22.681285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.591 [2024-07-24 19:10:22.681305] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.591 [2024-07-24 19:10:22.689548] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.591 [2024-07-24 19:10:22.689566] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.592 [2024-07-24 19:10:22.698414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.592 [2024-07-24 19:10:22.698433] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.592 [2024-07-24 19:10:22.707453] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.592 [2024-07-24 19:10:22.707472] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.592 [2024-07-24 19:10:22.716457] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.592 [2024-07-24 19:10:22.716475] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.592 [2024-07-24 19:10:22.725374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.592 [2024-07-24 19:10:22.725392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.592 [2024-07-24 19:10:22.733485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.592 [2024-07-24 19:10:22.733503] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.592 [2024-07-24 19:10:22.741870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.592 [2024-07-24 19:10:22.741888] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.592 [2024-07-24 19:10:22.750299] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.592 [2024-07-24 19:10:22.750321] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... duplicate log entries elided: the same error pair (subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeats several hundred more times with only the timestamps changing, spanning [2024-07-24 19:10:22.758661] through [2024-07-24 19:10:25.555167] (console time 00:09:36.592 to 00:09:39.464) ...]
00:09:39.464 [2024-07-24 19:10:25.568558] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.464 [2024-07-24 19:10:25.568577] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.464 [2024-07-24 19:10:25.582136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.464 [2024-07-24 19:10:25.582156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.464 [2024-07-24 19:10:25.595820] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.464 [2024-07-24 19:10:25.595839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.464 [2024-07-24 19:10:25.609208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.464 [2024-07-24 19:10:25.609226] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.464 [2024-07-24 19:10:25.622862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.464 [2024-07-24 19:10:25.622881] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.464 [2024-07-24 19:10:25.636221] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.464 [2024-07-24 19:10:25.636240] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.464 [2024-07-24 19:10:25.649387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.464 [2024-07-24 19:10:25.649406] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.464 [2024-07-24 19:10:25.662966] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.464 [2024-07-24 19:10:25.662984] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.464 [2024-07-24 19:10:25.676677] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.464 [2024-07-24 19:10:25.676696] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.464 [2024-07-24 19:10:25.690704] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.464 [2024-07-24 19:10:25.690729] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.725 [2024-07-24 19:10:25.704523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.725 [2024-07-24 19:10:25.704542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.725 [2024-07-24 19:10:25.717941] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.725 [2024-07-24 19:10:25.717961] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.725 [2024-07-24 19:10:25.731339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.725 [2024-07-24 19:10:25.731359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.725 [2024-07-24 19:10:25.747895] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.725 [2024-07-24 19:10:25.747914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.725 [2024-07-24 19:10:25.761345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.725 [2024-07-24 19:10:25.761364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.725 [2024-07-24 19:10:25.774993] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.725 [2024-07-24 19:10:25.775011] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.725 [2024-07-24 19:10:25.788201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.725 [2024-07-24 19:10:25.788220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.725 [2024-07-24 19:10:25.801555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.725 [2024-07-24 19:10:25.801578] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.725 [2024-07-24 19:10:25.815092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.725 [2024-07-24 19:10:25.815111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.725 [2024-07-24 19:10:25.828430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.725 [2024-07-24 19:10:25.828449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.725 [2024-07-24 19:10:25.841739] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.725 [2024-07-24 19:10:25.841758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.725 [2024-07-24 19:10:25.856093] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.725 [2024-07-24 19:10:25.856112] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.725 [2024-07-24 19:10:25.869530] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.725 [2024-07-24 19:10:25.869548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.725 [2024-07-24 19:10:25.883013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.725 [2024-07-24 19:10:25.883032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.725 [2024-07-24 19:10:25.896110] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.725 [2024-07-24 19:10:25.896129] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.725 [2024-07-24 19:10:25.909436] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.725 [2024-07-24 19:10:25.909456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.725 [2024-07-24 19:10:25.923055] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.725 [2024-07-24 19:10:25.923074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.725 [2024-07-24 19:10:25.936375] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.725 [2024-07-24 19:10:25.936394] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.725 [2024-07-24 19:10:25.949748] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.725 [2024-07-24 19:10:25.949768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.725 [2024-07-24 19:10:25.963313] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.725 [2024-07-24 19:10:25.963332] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.984 [2024-07-24 19:10:25.976915] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.984 [2024-07-24 19:10:25.976934] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.984 [2024-07-24 19:10:25.990360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.984 [2024-07-24 19:10:25.990380] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.984 [2024-07-24 19:10:26.003605] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.984 [2024-07-24 19:10:26.003625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.984 [2024-07-24 19:10:26.016917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.984 [2024-07-24 19:10:26.016936] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.984 [2024-07-24 19:10:26.030197] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.984 [2024-07-24 19:10:26.030217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.984 [2024-07-24 19:10:26.043719] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.984 [2024-07-24 19:10:26.043740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.984 [2024-07-24 19:10:26.057156] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.984 [2024-07-24 19:10:26.057180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.984 [2024-07-24 19:10:26.070763] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.984 [2024-07-24 19:10:26.070783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.984 [2024-07-24 19:10:26.084203] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.984 [2024-07-24 19:10:26.084223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.984 [2024-07-24 19:10:26.097516] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.984 [2024-07-24 19:10:26.097535] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.984 [2024-07-24 19:10:26.111090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.984 [2024-07-24 19:10:26.111109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.984 [2024-07-24 19:10:26.124159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.984 [2024-07-24 19:10:26.124178] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.984 [2024-07-24 19:10:26.137461] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.984 [2024-07-24 19:10:26.137480] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.984 [2024-07-24 19:10:26.151090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.984 [2024-07-24 19:10:26.151109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.984 [2024-07-24 19:10:26.162245] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.984 [2024-07-24 19:10:26.162264] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.984 [2024-07-24 19:10:26.176548] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.984 [2024-07-24 19:10:26.176567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.984 [2024-07-24 19:10:26.189948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.984 [2024-07-24 19:10:26.189967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.984 [2024-07-24 19:10:26.203694] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.984 [2024-07-24 19:10:26.203720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.984 [2024-07-24 19:10:26.214739] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.984 [2024-07-24 19:10:26.214758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.242 [2024-07-24 19:10:26.228588] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.242 [2024-07-24 19:10:26.228608] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.242 [2024-07-24 19:10:26.242694] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.242 [2024-07-24 19:10:26.242720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.242 [2024-07-24 19:10:26.258297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.242 [2024-07-24 19:10:26.258317] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.242 [2024-07-24 19:10:26.271765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.242 [2024-07-24 19:10:26.271785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.242 [2024-07-24 19:10:26.286175] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.242 [2024-07-24 19:10:26.286194] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.242 [2024-07-24 19:10:26.299263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.242 [2024-07-24 19:10:26.299283] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.242 [2024-07-24 19:10:26.312957] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.242 [2024-07-24 19:10:26.312981] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.242 [2024-07-24 19:10:26.326130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.242 [2024-07-24 19:10:26.326150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.242 [2024-07-24 19:10:26.339802] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.242 [2024-07-24 19:10:26.339821] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.242 [2024-07-24 19:10:26.352972] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.242 [2024-07-24 19:10:26.352992] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.242 [2024-07-24 19:10:26.366511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.242 [2024-07-24 19:10:26.366531] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.242 [2024-07-24 19:10:26.380139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.242 [2024-07-24 19:10:26.380159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.242 [2024-07-24 19:10:26.393678] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.242 [2024-07-24 19:10:26.393699] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.242 [2024-07-24 19:10:26.407267] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.242 [2024-07-24 19:10:26.407286] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.242 [2024-07-24 19:10:26.421140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.242 [2024-07-24 19:10:26.421160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.242 [2024-07-24 19:10:26.432092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.243 [2024-07-24 19:10:26.432112] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.243 [2024-07-24 19:10:26.445976] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.243 [2024-07-24 19:10:26.445996] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.243 [2024-07-24 19:10:26.459624] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.243 [2024-07-24 19:10:26.459644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.243 [2024-07-24 19:10:26.473018] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.243 [2024-07-24 19:10:26.473038] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.501 [2024-07-24 19:10:26.486489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.501 [2024-07-24 19:10:26.486508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.501 [2024-07-24 19:10:26.499704] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.501 [2024-07-24 19:10:26.499728] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.501 [2024-07-24 19:10:26.513227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.501 [2024-07-24 19:10:26.513248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.501 [2024-07-24 19:10:26.526415] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.501 [2024-07-24 19:10:26.526436] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.501 [2024-07-24 19:10:26.539871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.501 [2024-07-24 19:10:26.539890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.501 [2024-07-24 19:10:26.553003] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.501 [2024-07-24 19:10:26.553022] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.501 [2024-07-24 19:10:26.566999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.501 [2024-07-24 19:10:26.567018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.501 [2024-07-24 19:10:26.580669] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.501 [2024-07-24 19:10:26.580689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.501 [2024-07-24 19:10:26.594299] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.501 [2024-07-24 19:10:26.594318] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.501 [2024-07-24 19:10:26.605653] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.501 [2024-07-24 19:10:26.605672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.501 [2024-07-24 19:10:26.619504] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.501 [2024-07-24 19:10:26.619523] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.501 [2024-07-24 19:10:26.632652] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.501 [2024-07-24 19:10:26.632670] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.501 [2024-07-24 19:10:26.646230] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.501 [2024-07-24 19:10:26.646250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.501 [2024-07-24 19:10:26.659979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.501 [2024-07-24 19:10:26.659998] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.501 [2024-07-24 19:10:26.670956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.501 [2024-07-24 19:10:26.670975] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.501 [2024-07-24 19:10:26.684796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.501 [2024-07-24 19:10:26.684815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.501 [2024-07-24 19:10:26.698834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.501 [2024-07-24 19:10:26.698853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.501 [2024-07-24 19:10:26.710044] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.501 [2024-07-24 19:10:26.710062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.501 [2024-07-24 19:10:26.723978] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.501 [2024-07-24 19:10:26.723996] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.501 [2024-07-24 19:10:26.737791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.501 [2024-07-24 19:10:26.737810] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.760 [2024-07-24 19:10:26.753212] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.760 [2024-07-24 19:10:26.753231] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.760 [2024-07-24 19:10:26.766511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.760 [2024-07-24 19:10:26.766530] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.760 [2024-07-24 19:10:26.780306] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.760 [2024-07-24 19:10:26.780326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.760 [2024-07-24 19:10:26.794026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.760 [2024-07-24 19:10:26.794045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.760 [2024-07-24 19:10:26.805129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.760 [2024-07-24 19:10:26.805148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.760 [2024-07-24 19:10:26.818592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.760 [2024-07-24 19:10:26.818611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.760 [2024-07-24 19:10:26.832093] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.760 [2024-07-24 19:10:26.832112] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.760 [2024-07-24 19:10:26.845735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.760 [2024-07-24 19:10:26.845753] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.760 [2024-07-24 19:10:26.859368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.760 [2024-07-24 19:10:26.859387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.760 [2024-07-24 19:10:26.873071] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.760 [2024-07-24 19:10:26.873090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.760 [2024-07-24 19:10:26.886619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.760 [2024-07-24 19:10:26.886638] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.760 [2024-07-24 19:10:26.900735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.760 [2024-07-24 19:10:26.900753] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.760 [2024-07-24 19:10:26.915720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.760 [2024-07-24 19:10:26.915739] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.760 [2024-07-24 19:10:26.929482] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.760 [2024-07-24 19:10:26.929501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.760 [2024-07-24 19:10:26.943033] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.760 [2024-07-24 19:10:26.943052] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.760 [2024-07-24 19:10:26.956306] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.760 [2024-07-24 19:10:26.956333] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.760 [2024-07-24 19:10:26.969800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.760 [2024-07-24 19:10:26.969819] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.760 [2024-07-24 19:10:26.984657] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.760 [2024-07-24 19:10:26.984675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.018 [2024-07-24 19:10:26.999821] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.018 [2024-07-24 19:10:26.999841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.018 [2024-07-24 19:10:27.013497] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.018 [2024-07-24 19:10:27.013516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.018 [2024-07-24 19:10:27.026929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.018 [2024-07-24 19:10:27.026947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.018 [2024-07-24 19:10:27.040642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.018 [2024-07-24 19:10:27.040661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.018 [2024-07-24 19:10:27.054455] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.018 [2024-07-24 19:10:27.054474] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.018 [2024-07-24 19:10:27.067667] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.018 [2024-07-24 19:10:27.067686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.018 [2024-07-24 19:10:27.081239] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.018 [2024-07-24 19:10:27.081258] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.018 [2024-07-24 19:10:27.095385] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.018 [2024-07-24 19:10:27.095404] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.018 [2024-07-24 19:10:27.106486] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.018 [2024-07-24 19:10:27.106505] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.018 [2024-07-24 19:10:27.120438] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.018 [2024-07-24 19:10:27.120457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.018 [2024-07-24 19:10:27.133966] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.018 [2024-07-24 19:10:27.133986] 
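For reference, this failure mode is easy to reproduce by hand against a running target: a second nvmf_subsystem_add_ns call that requests an NSID already attached fails in spdk_nvmf_subsystem_add_ns_ext with exactly this message pair. A minimal sketch using scripts/rpc.py (the malloc bdev names are illustrative; the subsystem NQN is the one this test drives):

# assumes a running SPDK target that already exposes nqn.2016-06.io.spdk:cnode1
rpc=scripts/rpc.py
$rpc bdev_malloc_create 64 512 -b malloc0                            # 64 MiB bdev, 512 B blocks
$rpc bdev_malloc_create 64 512 -b malloc1                            # second bdev, illustrative name
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # attaches as NSID 1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1   # rejected: NSID 1 already in use

Dropping the explicit -n lets the target allocate the next free NSID instead of colliding.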
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:41.018 Latency(us)
00:09:41.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:41.019 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:41.019 Nvme1n1 : 5.01 17238.73 134.68 0.00 0.00 7418.78 2398.62 50751.08
00:09:41.019 ===================================================================================================================
00:09:41.019 Total : 17238.73 134.68 0.00 0.00 7418.78 2398.62 50751.08
00:09:41.019 [... the same NSID-conflict pair resumes at roughly 12 ms intervals from 19:10:27.143 as the remaining retries drain; the last few occurrences follow ...]
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.277 [2024-07-24 19:10:27.287865] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.277 [2024-07-24 19:10:27.299888] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.277 [2024-07-24 19:10:27.299901] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.277 [2024-07-24 19:10:27.311918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.277 [2024-07-24 19:10:27.311929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1412241) - No such process 00:09:41.277 19:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1412241 00:09:41.277 19:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.277 19:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.277 19:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:41.277 19:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.277 19:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:41.277 19:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.277 19:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:41.277 delay0 00:09:41.277 19:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.277 19:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:41.277 19:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.277 19:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:41.277 19:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.277 19:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:41.277 EAL: No free 2048 kB hugepages reported on node 1 00:09:41.277 [2024-07-24 19:10:27.406735] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:47.833 Initializing NVMe Controllers 00:09:47.833 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:47.833 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:47.833 Initialization complete. Launching workers. 
00:09:47.833 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 106 00:09:47.833 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 393, failed to submit 33 00:09:47.833 success 194, unsuccess 199, failed 0 00:09:47.833 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:47.833 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:47.833 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:47.833 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:09:47.833 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:47.833 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:09:47.833 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:47.833 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:47.833 rmmod nvme_tcp 00:09:47.833 rmmod nvme_fabrics 00:09:47.833 rmmod nvme_keyring 00:09:47.833 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:47.833 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:09:47.833 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:09:47.833 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1410242 ']' 00:09:47.833 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1410242 00:09:47.833 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1410242 ']' 00:09:47.833 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1410242 00:09:47.833 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:09:47.833 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:47.833 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1410242 00:09:47.833 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:47.833 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:47.833 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1410242' 00:09:47.833 killing process with pid 1410242 00:09:47.833 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1410242 00:09:47.833 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1410242 00:09:47.833 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:47.833 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:47.833 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:47.833 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:47.833 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:47.833 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # 
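The abort run summarized above was set up by the sequence at zcopy.sh@52-56: detach NSID 1, rebuild it on top of a delay bdev, and launch the abort example against it. Every command appears verbatim in the trace (rpc_cmd is the harness wrapper around these scripts/rpc.py calls), so the flow can be replayed by hand; a sketch, assuming the same target address and subsystem as this job, with delay bdev latencies in microseconds:

rpc=scripts/rpc.py
$rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1           # detach the old NSID 1
$rpc bdev_delay_create -b malloc0 -d delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000                      # ~1 s injected latency on reads and writes
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1    # re-attach NSID 1 via the delay bdev
# 5 s of queue-depth-64 randrw against the slow namespace, submitting aborts for in-flight I/O:
build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The injected latency is what makes aborts land while commands are still outstanding, which is the behavior the success/unsuccess counters above are measuring.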
xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.833 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.834 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.366 19:10:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:50.366 00:09:50.366 real 0m32.555s 00:09:50.366 user 0m41.646s 00:09:50.366 sys 0m12.972s 00:09:50.366 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:50.366 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.366 ************************************ 00:09:50.366 END TEST nvmf_zcopy 00:09:50.366 ************************************ 00:09:50.366 19:10:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:50.366 19:10:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:50.366 19:10:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:50.366 19:10:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:50.366 ************************************ 00:09:50.366 START TEST nvmf_nmic 00:09:50.366 ************************************ 00:09:50.366 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:50.366 * Looking for test storage... 00:09:50.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:50.366 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:50.366 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:50.366 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.366 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.366 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.366 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.366 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:50.366 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.366 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.366 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.366 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.366 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:09:50.367 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:56.970 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:56.970 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:09:56.970 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:56.970 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:09:56.970 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:56.970 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:56.971 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ 
ice == unknown ]] 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:56.971 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:56.971 Found net devices under 0000:af:00.0: cvl_0_0 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:56.971 Found net devices under 0000:af:00.1: cvl_0_1 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:56.971 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:56.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:56.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:09:56.971 00:09:56.971 --- 10.0.0.2 ping statistics --- 00:09:56.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.972 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:09:56.972 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:56.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:56.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:09:56.972 00:09:56.972 --- 10.0.0.1 ping statistics --- 00:09:56.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.972 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:09:56.972 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:56.972 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:09:56.972 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:56.972 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:56.972 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:56.972 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:56.972 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:56.972 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:56.972 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:56.972 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:56.972 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:56.972 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:56.972 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:56.972 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1417906 00:09:56.972 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1417906 00:09:56.972 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1417906 ']' 00:09:56.972 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.972 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:56.972 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.972 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:56.972 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:56.972 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:56.972 [2024-07-24 19:10:42.740838] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:09:56.972 [2024-07-24 19:10:42.740886] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:56.972 EAL: No free 2048 kB hugepages reported on node 1 00:09:56.972 [2024-07-24 19:10:42.814916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:56.972 [2024-07-24 19:10:42.890846] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:56.972 [2024-07-24 19:10:42.890898] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:56.972 [2024-07-24 19:10:42.890912] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:56.972 [2024-07-24 19:10:42.890924] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:56.972 [2024-07-24 19:10:42.890934] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:56.972 [2024-07-24 19:10:42.890985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.972 [2024-07-24 19:10:42.891006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:56.972 [2024-07-24 19:10:42.891091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:56.972 [2024-07-24 19:10:42.891094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.550 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:57.550 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:57.550 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:57.550 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:57.550 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.550 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:57.550 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:57.550 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.550 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.550 [2024-07-24 19:10:43.590060] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:57.550 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.550 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:57.550 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.550 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.550 Malloc0 00:09:57.550 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.550 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:57.550 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # 
xtrace_disable
00:09:57.550 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:57.550 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:57.550 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:09:57.550 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:57.550 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:57.550 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:57.550 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:09:57.550 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:57.550 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:57.550 [2024-07-24 19:10:43.644848] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:57.550 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:57.550 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:09:57.550 test case1: single bdev can't be used in multiple subsystems
00:09:57.550 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:09:57.550 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:57.550 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:57.550 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:57.551 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:09:57.551 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:57.551 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:57.551 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:57.551 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:09:57.551 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:09:57.551 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:57.551 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:57.551 [2024-07-24 19:10:43.668749] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:09:57.551 [2024-07-24 19:10:43.668773] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:09:57.551 [2024-07-24 19:10:43.668787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:57.551 request:
00:09:57.551 {
00:09:57.551 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:09:57.551 "namespace": {
00:09:57.551 "bdev_name": "Malloc0",
00:09:57.551 "no_auto_visible": false
00:09:57.551 },
00:09:57.551 "method": "nvmf_subsystem_add_ns",
00:09:57.551 "req_id": 1
00:09:57.551 }
00:09:57.551 Got JSON-RPC error response
00:09:57.551 response:
00:09:57.551 {
00:09:57.551 "code": -32602,
00:09:57.551 "message": "Invalid parameters"
00:09:57.551 }
00:09:57.551 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:09:57.551 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:09:57.551 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:09:57.551 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:09:57.551 Adding namespace failed - expected result.
00:09:57.551 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:09:57.551 test case2: host connect to nvmf target in multiple paths
00:09:57.551 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:09:57.551 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:57.551 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:57.551 [2024-07-24 19:10:43.684914] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:09:57.551 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:57.551 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:09:58.927 19:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:10:00.302 19:10:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:10:00.302 19:10:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0
00:10:00.302 19:10:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:10:00.302 19:10:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:10:00.302 19:10:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2
00:10:02.202 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:10:02.202 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:10:02.202 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:10:02.202 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:10:02.202 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:10:02.202 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0
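The trace above is the whole of test case2: the same host NQN is connected to nqn.2016-06.io.spdk:cnode1 twice, once per listener (TCP ports 4420 and 4421), and waitforserial then polls lsblk until a block device carrying the subsystem serial appears. For anyone replaying this step outside the harness, the following is a minimal bash approximation; the NQN, host UUID, target address, ports, and serial are copied from the log, while the bounded retry loop is an illustrative stand-in for waitforserial, not the harness's exact code.

#!/usr/bin/env bash
# Sketch only: replay the test case2 multipath connect by hand.
# Identifiers come from the log above; the 15x2s poll merely
# approximates waitforserial's behavior.
set -e
subsys=nqn.2016-06.io.spdk:cnode1
uuid=006f0d1b-21c0-e711-906e-00163566263e
serial=SPDKISFASTANDAWESOME
# Two paths to the same subsystem: one nvme connect per listener.
for port in 4420 4421; do
  nvme connect --hostnqn="nqn.2014-08.org.nvmexpress:uuid:${uuid}" \
    --hostid="${uuid}" -t tcp -n "${subsys}" -a 10.0.0.2 -s "${port}"
done
# Poll until a block device with the expected serial is visible.
for _ in $(seq 1 15); do
  if [ "$(lsblk -l -o NAME,SERIAL | grep -c "${serial}")" -ge 1 ]; then
    echo "serial ${serial} visible"
    exit 0
  fi
  sleep 2
done
echo "timed out waiting for ${serial}" >&2
exit 1

Running the sketch requires root and nvme-cli; tearing it down mirrors the log's later 'nvme disconnect -n nqn.2016-06.io.spdk:cnode1', which drops both controllers at once.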
00:10:02.202 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:10:02.202 [global]
00:10:02.202 thread=1
00:10:02.202 invalidate=1
00:10:02.202 rw=write
00:10:02.202 time_based=1
00:10:02.202 runtime=1
00:10:02.202 ioengine=libaio
00:10:02.202 direct=1
00:10:02.202 bs=4096
00:10:02.202 iodepth=1
00:10:02.202 norandommap=0
00:10:02.202 numjobs=1
00:10:02.202
00:10:02.202 verify_dump=1
00:10:02.202 verify_backlog=512
00:10:02.202 verify_state_save=0
00:10:02.202 do_verify=1
00:10:02.202 verify=crc32c-intel
00:10:02.202 [job0]
00:10:02.202 filename=/dev/nvme0n1
00:10:02.202 Could not set queue depth (nvme0n1)
00:10:02.767 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:10:02.767 fio-3.35
00:10:02.767 Starting 1 thread
00:10:03.701
00:10:03.701 job0: (groupid=0, jobs=1): err= 0: pid=1419136: Wed Jul 24 19:10:49 2024
00:10:03.701 read: IOPS=20, BW=83.5KiB/s (85.5kB/s)(84.0KiB/1006msec)
00:10:03.701 slat (nsec): min=11319, max=25714, avg=24590.86, stdev=3056.18
00:10:03.701 clat (usec): min=40794, max=41872, avg=41011.35, stdev=212.23
00:10:03.701 lat (usec): min=40819, max=41897, avg=41035.94, stdev=211.98
00:10:03.701 clat percentiles (usec):
00:10:03.701 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157],
00:10:03.701 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:10:03.701 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:10:03.701 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681],
00:10:03.701 | 99.99th=[41681]
00:10:03.701 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets
00:10:03.701 slat (usec): min=9, max=24429, avg=60.41, stdev=1079.06
00:10:03.701 clat (usec): min=188, max=489, avg=219.11, stdev=27.39
00:10:03.701 lat (usec): min=204, max=24918, avg=279.52, stdev=1091.31
00:10:03.701 clat percentiles (usec):
00:10:03.701 | 1.00th=[ 194], 5.00th=[ 196], 10.00th=[ 196], 20.00th=[ 200],
00:10:03.701 | 30.00th=[ 204], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 221],
00:10:03.701 | 70.00th=[ 225], 80.00th=[ 233], 90.00th=[ 247], 95.00th=[ 273],
00:10:03.701 | 99.00th=[ 285], 99.50th=[ 326], 99.90th=[ 490], 99.95th=[ 490],
00:10:03.701 | 99.99th=[ 490]
00:10:03.701 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1
00:10:03.701 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:10:03.701 lat (usec) : 250=87.43%, 500=8.63%
00:10:03.701 lat (msec) : 50=3.94%
00:10:03.701 cpu : usr=0.00%, sys=1.00%, ctx=536, majf=0, minf=2
00:10:03.701 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:10:03.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:03.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:03.701 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:03.701 latency : target=0, window=0, percentile=100.00%, depth=1
00:10:03.701
00:10:03.701 Run status group 0 (all jobs):
00:10:03.701 READ: bw=83.5KiB/s (85.5kB/s), 83.5KiB/s-83.5KiB/s (85.5kB/s-85.5kB/s), io=84.0KiB (86.0kB), run=1006-1006msec
00:10:03.701 WRITE: bw=2036KiB/s (2085kB/s), 2036KiB/s-2036KiB/s (2085kB/s-2085kB/s), io=2048KiB (2097kB), run=1006-1006msec
00:10:03.701
00:10:03.701 Disk stats (read/write):
00:10:03.701 nvme0n1: ios=70/512, merge=0/0, ticks=1277/108, in_queue=1385,
util=98.70% 00:10:03.701 19:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:03.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:03.960 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:03.960 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:03.960 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:03.960 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:03.960 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:03.960 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:03.960 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:03.960 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:03.960 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:03.960 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:03.960 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:10:03.960 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:03.960 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:03.960 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:03.960 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:03.960 rmmod nvme_tcp 00:10:03.960 rmmod nvme_fabrics 00:10:03.960 rmmod nvme_keyring 00:10:03.960 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:03.960 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:03.960 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:03.960 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1417906 ']' 00:10:03.960 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1417906 00:10:03.960 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1417906 ']' 00:10:03.960 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1417906 00:10:03.960 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:03.960 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:03.960 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1417906 00:10:03.960 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:03.960 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:03.960 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1417906' 00:10:03.960 killing process with pid 1417906 00:10:03.960 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # 
kill 1417906 00:10:03.960 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 1417906 00:10:04.219 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:04.219 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:04.219 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:04.219 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:04.219 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:04.219 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.219 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.219 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.753 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:06.753 00:10:06.753 real 0m16.372s 00:10:06.753 user 0m39.529s 00:10:06.753 sys 0m6.017s 00:10:06.753 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:06.753 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.753 ************************************ 00:10:06.753 END TEST nvmf_nmic 00:10:06.753 ************************************ 00:10:06.753 19:10:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:06.753 19:10:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:06.753 19:10:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:06.753 19:10:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:06.753 ************************************ 00:10:06.753 START TEST nvmf_fio_target 00:10:06.753 ************************************ 00:10:06.753 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:06.753 * Looking for test storage... 
00:10:06.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:06.753 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:06.753 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:06.753 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:06.753 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.753 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:06.753 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.753 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:06.753 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.753 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.753 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.753 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.753 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.753 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:06.753 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:10:06.753 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.753 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.753 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:06.753 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:06.753 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:06.753 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.753 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.753 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.753 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.754 19:10:52 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.754 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.754 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:06.754 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.754 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:06.754 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:06.754 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:06.754 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:06.754 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:06.754 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:06.754 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:06.754 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:06.754 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:06.754 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:06.754 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:06.754 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:06.754 19:10:52 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:06.754 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:06.754 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:06.754 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:06.754 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:06.754 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:06.754 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.754 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.754 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.754 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:06.754 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:06.754 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:10:06.754 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:13.322 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:13.322 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:13.322 
19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:13.322 Found net devices under 0000:af:00.0: cvl_0_0 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:13.322 Found net devices under 0000:af:00.1: cvl_0_1 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:13.322 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:13.323 19:10:59 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:13.323 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:13.323 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:10:13.323 00:10:13.323 --- 10.0.0.2 ping statistics --- 00:10:13.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.323 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:13.323 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:13.323 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:10:13.323 00:10:13.323 --- 10.0.0.1 ping statistics --- 00:10:13.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.323 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1423093 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1423093 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1423093 ']' 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:13.323 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.323 [2024-07-24 19:10:59.550677] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:10:13.323 [2024-07-24 19:10:59.550728] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.582 EAL: No free 2048 kB hugepages reported on node 1 00:10:13.582 [2024-07-24 19:10:59.622968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:13.582 [2024-07-24 19:10:59.695791] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:13.582 [2024-07-24 19:10:59.695834] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:13.582 [2024-07-24 19:10:59.695848] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:13.582 [2024-07-24 19:10:59.695858] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:13.582 [2024-07-24 19:10:59.695867] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:13.582 [2024-07-24 19:10:59.695937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:13.582 [2024-07-24 19:10:59.696033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:13.582 [2024-07-24 19:10:59.696119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:13.582 [2024-07-24 19:10:59.696123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.149 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:14.149 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:14.149 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:14.149 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:14.149 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.407 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:14.407 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:14.407 [2024-07-24 19:11:00.565516] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:14.407 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:14.666 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:14.666 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:14.925 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:14.925 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:15.184 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:15.184 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:15.184 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:15.184 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:15.442 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:15.732 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:15.732 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:15.732 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:15.732 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:15.991 19:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:15.991 19:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:16.250 19:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:16.509 19:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:16.509 19:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:16.509 19:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:16.509 19:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:16.768 19:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:17.027 [2024-07-24 19:11:03.058429] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:17.027 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:17.286 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:17.286 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:18.664 19:11:04 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:18.664 19:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:18.664 19:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:18.664 19:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:18.664 19:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:18.664 19:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:20.568 19:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:20.568 19:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:20.568 19:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:20.568 19:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:20.568 19:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:20.568 19:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:20.568 19:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:20.568 [global] 00:10:20.568 thread=1 00:10:20.568 invalidate=1 00:10:20.568 rw=write 00:10:20.568 time_based=1 00:10:20.568 runtime=1 00:10:20.568 ioengine=libaio 00:10:20.568 direct=1 00:10:20.568 bs=4096 00:10:20.568 iodepth=1 00:10:20.568 norandommap=0 00:10:20.568 numjobs=1 00:10:20.568 00:10:20.568 verify_dump=1 00:10:20.568 verify_backlog=512 00:10:20.568 verify_state_save=0 00:10:20.568 do_verify=1 00:10:20.568 verify=crc32c-intel 00:10:20.826 [job0] 00:10:20.826 filename=/dev/nvme0n1 00:10:20.826 [job1] 00:10:20.826 filename=/dev/nvme0n2 00:10:20.826 [job2] 00:10:20.826 filename=/dev/nvme0n3 00:10:20.826 [job3] 00:10:20.826 filename=/dev/nvme0n4 00:10:20.826 Could not set queue depth (nvme0n1) 00:10:20.826 Could not set queue depth (nvme0n2) 00:10:20.826 Could not set queue depth (nvme0n3) 00:10:20.826 Could not set queue depth (nvme0n4) 00:10:21.083 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:21.083 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:21.083 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:21.083 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:21.083 fio-3.35 00:10:21.083 Starting 4 threads 00:10:22.458 00:10:22.458 job0: (groupid=0, jobs=1): err= 0: pid=1424635: Wed Jul 24 19:11:08 2024 00:10:22.458 read: IOPS=986, BW=3944KiB/s (4039kB/s)(3948KiB/1001msec) 00:10:22.458 slat (nsec): min=8942, max=24832, avg=9757.98, stdev=1563.55 00:10:22.458 clat (usec): min=311, max=41208, avg=739.39, stdev=3642.33 00:10:22.458 lat (usec): min=321, max=41218, avg=749.15, stdev=3642.34 00:10:22.458 clat percentiles (usec): 00:10:22.458 | 1.00th=[ 326], 5.00th=[ 343], 10.00th=[ 351], 20.00th=[ 359], 
00:10:22.458 | 30.00th=[ 367], 40.00th=[ 375], 50.00th=[ 424], 60.00th=[ 437], 00:10:22.458 | 70.00th=[ 445], 80.00th=[ 453], 90.00th=[ 478], 95.00th=[ 490], 00:10:22.458 | 99.00th=[ 570], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:22.458 | 99.99th=[41157] 00:10:22.458 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:22.458 slat (nsec): min=7596, max=55687, avg=13396.31, stdev=2338.32 00:10:22.458 clat (usec): min=180, max=3459, avg=235.34, stdev=150.45 00:10:22.458 lat (usec): min=193, max=3477, avg=248.73, stdev=150.65 00:10:22.458 clat percentiles (usec): 00:10:22.458 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 210], 00:10:22.458 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 229], 00:10:22.458 | 70.00th=[ 233], 80.00th=[ 237], 90.00th=[ 247], 95.00th=[ 281], 00:10:22.458 | 99.00th=[ 322], 99.50th=[ 367], 99.90th=[ 3228], 99.95th=[ 3458], 00:10:22.458 | 99.99th=[ 3458] 00:10:22.458 bw ( KiB/s): min= 4096, max= 4096, per=20.42%, avg=4096.00, stdev= 0.00, samples=1 00:10:22.458 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:22.458 lat (usec) : 250=46.05%, 500=52.41%, 750=0.94% 00:10:22.458 lat (msec) : 2=0.10%, 4=0.10%, 50=0.40% 00:10:22.458 cpu : usr=2.30%, sys=3.20%, ctx=2012, majf=0, minf=2 00:10:22.458 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:22.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.458 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.458 issued rwts: total=987,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.458 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:22.458 job1: (groupid=0, jobs=1): err= 0: pid=1424636: Wed Jul 24 19:11:08 2024 00:10:22.458 read: IOPS=1518, BW=6074KiB/s (6220kB/s)(6080KiB/1001msec) 00:10:22.458 slat (nsec): min=8890, max=28425, avg=9824.51, stdev=1525.00 00:10:22.458 clat (usec): min=310, max=3275, avg=408.58, stdev=105.60 00:10:22.458 lat (usec): min=320, max=3287, avg=418.41, stdev=105.67 00:10:22.458 clat percentiles (usec): 00:10:22.458 | 1.00th=[ 330], 5.00th=[ 343], 10.00th=[ 351], 20.00th=[ 355], 00:10:22.458 | 30.00th=[ 363], 40.00th=[ 367], 50.00th=[ 375], 60.00th=[ 383], 00:10:22.458 | 70.00th=[ 420], 80.00th=[ 494], 90.00th=[ 506], 95.00th=[ 515], 00:10:22.458 | 99.00th=[ 529], 99.50th=[ 553], 99.90th=[ 1778], 99.95th=[ 3261], 00:10:22.458 | 99.99th=[ 3261] 00:10:22.458 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:22.458 slat (nsec): min=4385, max=40137, avg=11894.26, stdev=2864.67 00:10:22.458 clat (usec): min=175, max=416, avg=219.04, stdev=19.19 00:10:22.458 lat (usec): min=183, max=425, avg=230.94, stdev=20.27 00:10:22.458 clat percentiles (usec): 00:10:22.458 | 1.00th=[ 184], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 206], 00:10:22.458 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 223], 00:10:22.458 | 70.00th=[ 227], 80.00th=[ 231], 90.00th=[ 239], 95.00th=[ 247], 00:10:22.458 | 99.00th=[ 265], 99.50th=[ 310], 99.90th=[ 388], 99.95th=[ 416], 00:10:22.458 | 99.99th=[ 416] 00:10:22.458 bw ( KiB/s): min= 8192, max= 8192, per=40.85%, avg=8192.00, stdev= 0.00, samples=1 00:10:22.458 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:22.458 lat (usec) : 250=48.36%, 500=43.88%, 750=7.62% 00:10:22.458 lat (msec) : 2=0.10%, 4=0.03% 00:10:22.458 cpu : usr=3.20%, sys=4.70%, ctx=3057, majf=0, minf=1 00:10:22.458 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:10:22.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.458 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.458 issued rwts: total=1520,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.458 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:22.458 job2: (groupid=0, jobs=1): err= 0: pid=1424637: Wed Jul 24 19:11:08 2024 00:10:22.458 read: IOPS=1007, BW=4031KiB/s (4128kB/s)(4120KiB/1022msec) 00:10:22.458 slat (nsec): min=8863, max=37446, avg=9847.10, stdev=1835.41 00:10:22.458 clat (usec): min=232, max=41515, avg=484.77, stdev=2213.72 00:10:22.458 lat (usec): min=249, max=41528, avg=494.62, stdev=2214.14 00:10:22.458 clat percentiles (usec): 00:10:22.458 | 1.00th=[ 247], 5.00th=[ 277], 10.00th=[ 302], 20.00th=[ 347], 00:10:22.458 | 30.00th=[ 355], 40.00th=[ 359], 50.00th=[ 363], 60.00th=[ 367], 00:10:22.458 | 70.00th=[ 371], 80.00th=[ 375], 90.00th=[ 383], 95.00th=[ 396], 00:10:22.458 | 99.00th=[ 515], 99.50th=[ 611], 99.90th=[41157], 99.95th=[41681], 00:10:22.458 | 99.99th=[41681] 00:10:22.458 write: IOPS=1502, BW=6012KiB/s (6156kB/s)(6144KiB/1022msec); 0 zone resets 00:10:22.458 slat (usec): min=11, max=40712, avg=55.44, stdev=1199.26 00:10:22.458 clat (usec): min=193, max=690, avg=274.20, stdev=56.19 00:10:22.458 lat (usec): min=206, max=41077, avg=329.65, stdev=1203.75 00:10:22.458 clat percentiles (usec): 00:10:22.458 | 1.00th=[ 202], 5.00th=[ 210], 10.00th=[ 219], 20.00th=[ 227], 00:10:22.458 | 30.00th=[ 231], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 265], 00:10:22.458 | 70.00th=[ 334], 80.00th=[ 343], 90.00th=[ 351], 95.00th=[ 355], 00:10:22.458 | 99.00th=[ 363], 99.50th=[ 371], 99.90th=[ 494], 99.95th=[ 693], 00:10:22.458 | 99.99th=[ 693] 00:10:22.458 bw ( KiB/s): min= 6024, max= 6264, per=30.64%, avg=6144.00, stdev=169.71, samples=2 00:10:22.458 iops : min= 1506, max= 1566, avg=1536.00, stdev=42.43, samples=2 00:10:22.458 lat (usec) : 250=32.89%, 500=66.41%, 750=0.51%, 1000=0.04% 00:10:22.458 lat (msec) : 10=0.04%, 50=0.12% 00:10:22.459 cpu : usr=1.27%, sys=3.53%, ctx=2569, majf=0, minf=1 00:10:22.459 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:22.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.459 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.459 issued rwts: total=1030,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.459 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:22.459 job3: (groupid=0, jobs=1): err= 0: pid=1424638: Wed Jul 24 19:11:08 2024 00:10:22.459 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:10:22.459 slat (nsec): min=9014, max=27723, avg=9960.58, stdev=1923.29 00:10:22.459 clat (usec): min=246, max=42421, avg=701.39, stdev=3625.08 00:10:22.459 lat (usec): min=255, max=42431, avg=711.35, stdev=3625.30 00:10:22.459 clat percentiles (usec): 00:10:22.459 | 1.00th=[ 297], 5.00th=[ 347], 10.00th=[ 351], 20.00th=[ 359], 00:10:22.459 | 30.00th=[ 363], 40.00th=[ 371], 50.00th=[ 371], 60.00th=[ 379], 00:10:22.459 | 70.00th=[ 383], 80.00th=[ 392], 90.00th=[ 429], 95.00th=[ 482], 00:10:22.459 | 99.00th=[ 594], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:22.459 | 99.99th=[42206] 00:10:22.459 write: IOPS=1026, BW=4108KiB/s (4206kB/s)(4112KiB/1001msec); 0 zone resets 00:10:22.459 slat (usec): min=4, max=2087, avg=15.92, stdev=68.57 00:10:22.459 clat (usec): min=157, max=1556, avg=243.46, stdev=72.76 00:10:22.459 lat 
(usec): min=170, max=2597, avg=259.38, stdev=106.16 00:10:22.459 clat percentiles (usec): 00:10:22.459 | 1.00th=[ 182], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 210], 00:10:22.459 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 235], 00:10:22.459 | 70.00th=[ 241], 80.00th=[ 255], 90.00th=[ 318], 95.00th=[ 347], 00:10:22.459 | 99.00th=[ 400], 99.50th=[ 474], 99.90th=[ 1352], 99.95th=[ 1549], 00:10:22.459 | 99.99th=[ 1549] 00:10:22.459 bw ( KiB/s): min= 4096, max= 4096, per=20.42%, avg=4096.00, stdev= 0.00, samples=1 00:10:22.459 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:22.459 lat (usec) : 250=39.28%, 500=59.36%, 750=0.88% 00:10:22.459 lat (msec) : 2=0.10%, 50=0.39% 00:10:22.459 cpu : usr=1.10%, sys=2.70%, ctx=2056, majf=0, minf=1 00:10:22.459 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:22.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.459 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.459 issued rwts: total=1024,1028,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.459 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:22.459 00:10:22.459 Run status group 0 (all jobs): 00:10:22.459 READ: bw=17.4MiB/s (18.3MB/s), 3944KiB/s-6074KiB/s (4039kB/s-6220kB/s), io=17.8MiB (18.7MB), run=1001-1022msec 00:10:22.459 WRITE: bw=19.6MiB/s (20.5MB/s), 4092KiB/s-6138KiB/s (4190kB/s-6285kB/s), io=20.0MiB (21.0MB), run=1001-1022msec 00:10:22.459 00:10:22.459 Disk stats (read/write): 00:10:22.459 nvme0n1: ios=609/1024, merge=0/0, ticks=1367/229, in_queue=1596, util=85.27% 00:10:22.459 nvme0n2: ios=1176/1536, merge=0/0, ticks=703/319, in_queue=1022, util=86.49% 00:10:22.459 nvme0n3: ios=1046/1190, merge=0/0, ticks=1226/337, in_queue=1563, util=94.79% 00:10:22.459 nvme0n4: ios=588/1024, merge=0/0, ticks=735/237, in_queue=972, util=99.34% 00:10:22.459 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:22.459 [global] 00:10:22.459 thread=1 00:10:22.459 invalidate=1 00:10:22.459 rw=randwrite 00:10:22.459 time_based=1 00:10:22.459 runtime=1 00:10:22.459 ioengine=libaio 00:10:22.459 direct=1 00:10:22.459 bs=4096 00:10:22.459 iodepth=1 00:10:22.459 norandommap=0 00:10:22.459 numjobs=1 00:10:22.459 00:10:22.459 verify_dump=1 00:10:22.459 verify_backlog=512 00:10:22.459 verify_state_save=0 00:10:22.459 do_verify=1 00:10:22.459 verify=crc32c-intel 00:10:22.459 [job0] 00:10:22.459 filename=/dev/nvme0n1 00:10:22.459 [job1] 00:10:22.459 filename=/dev/nvme0n2 00:10:22.459 [job2] 00:10:22.459 filename=/dev/nvme0n3 00:10:22.459 [job3] 00:10:22.459 filename=/dev/nvme0n4 00:10:22.459 Could not set queue depth (nvme0n1) 00:10:22.459 Could not set queue depth (nvme0n2) 00:10:22.459 Could not set queue depth (nvme0n3) 00:10:22.459 Could not set queue depth (nvme0n4) 00:10:22.717 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.717 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.717 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.717 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.717 fio-3.35 00:10:22.717 Starting 4 threads 00:10:24.093 00:10:24.093 job0: (groupid=0, 
jobs=1): err= 0: pid=1425058: Wed Jul 24 19:11:10 2024 00:10:24.093 read: IOPS=21, BW=85.0KiB/s (87.1kB/s)(88.0KiB/1035msec) 00:10:24.093 slat (nsec): min=11496, max=26525, avg=18564.09, stdev=5667.23 00:10:24.093 clat (usec): min=40834, max=41506, avg=41007.20, stdev=139.40 00:10:24.093 lat (usec): min=40847, max=41519, avg=41025.76, stdev=137.05 00:10:24.093 clat percentiles (usec): 00:10:24.093 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:24.093 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:24.093 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:24.093 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:10:24.093 | 99.99th=[41681] 00:10:24.093 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:10:24.093 slat (nsec): min=11866, max=43886, avg=13357.42, stdev=2013.67 00:10:24.093 clat (usec): min=205, max=433, avg=241.11, stdev=25.64 00:10:24.093 lat (usec): min=218, max=477, avg=254.47, stdev=26.05 00:10:24.093 clat percentiles (usec): 00:10:24.093 | 1.00th=[ 208], 5.00th=[ 215], 10.00th=[ 217], 20.00th=[ 223], 00:10:24.093 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 243], 00:10:24.093 | 70.00th=[ 251], 80.00th=[ 260], 90.00th=[ 273], 95.00th=[ 285], 00:10:24.093 | 99.00th=[ 330], 99.50th=[ 371], 99.90th=[ 433], 99.95th=[ 433], 00:10:24.093 | 99.99th=[ 433] 00:10:24.093 bw ( KiB/s): min= 4096, max= 4096, per=30.08%, avg=4096.00, stdev= 0.00, samples=1 00:10:24.093 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:24.093 lat (usec) : 250=67.42%, 500=28.46% 00:10:24.093 lat (msec) : 50=4.12% 00:10:24.093 cpu : usr=0.68%, sys=0.77%, ctx=534, majf=0, minf=1 00:10:24.093 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:24.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.093 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.093 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.093 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:24.093 job1: (groupid=0, jobs=1): err= 0: pid=1425059: Wed Jul 24 19:11:10 2024 00:10:24.093 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:24.093 slat (nsec): min=8674, max=34083, avg=9319.79, stdev=1202.90 00:10:24.093 clat (usec): min=279, max=423, avg=334.75, stdev=14.72 00:10:24.093 lat (usec): min=288, max=432, avg=344.07, stdev=14.77 00:10:24.093 clat percentiles (usec): 00:10:24.093 | 1.00th=[ 302], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 322], 00:10:24.093 | 30.00th=[ 330], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 338], 00:10:24.093 | 70.00th=[ 343], 80.00th=[ 347], 90.00th=[ 355], 95.00th=[ 359], 00:10:24.093 | 99.00th=[ 375], 99.50th=[ 388], 99.90th=[ 408], 99.95th=[ 424], 00:10:24.093 | 99.99th=[ 424] 00:10:24.093 write: IOPS=1985, BW=7940KiB/s (8131kB/s)(7948KiB/1001msec); 0 zone resets 00:10:24.093 slat (nsec): min=8180, max=39885, avg=12419.96, stdev=1723.39 00:10:24.093 clat (usec): min=171, max=442, avg=221.28, stdev=32.16 00:10:24.093 lat (usec): min=183, max=475, avg=233.70, stdev=32.39 00:10:24.093 clat percentiles (usec): 00:10:24.093 | 1.00th=[ 178], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 192], 00:10:24.093 | 30.00th=[ 198], 40.00th=[ 206], 50.00th=[ 217], 60.00th=[ 225], 00:10:24.093 | 70.00th=[ 231], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 277], 00:10:24.093 | 99.00th=[ 297], 99.50th=[ 306], 99.90th=[ 400], 99.95th=[ 441], 
00:10:24.093 | 99.99th=[ 441] 00:10:24.093 bw ( KiB/s): min= 8192, max= 8192, per=60.17%, avg=8192.00, stdev= 0.00, samples=1 00:10:24.093 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:24.093 lat (usec) : 250=43.51%, 500=56.49% 00:10:24.093 cpu : usr=2.30%, sys=4.00%, ctx=3526, majf=0, minf=2 00:10:24.093 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:24.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.093 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.093 issued rwts: total=1536,1987,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.093 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:24.093 job2: (groupid=0, jobs=1): err= 0: pid=1425060: Wed Jul 24 19:11:10 2024 00:10:24.093 read: IOPS=21, BW=85.4KiB/s (87.5kB/s)(88.0KiB/1030msec) 00:10:24.093 slat (nsec): min=11942, max=25052, avg=23587.45, stdev=2643.37 00:10:24.093 clat (usec): min=40895, max=41247, avg=40977.70, stdev=70.81 00:10:24.093 lat (usec): min=40920, max=41258, avg=41001.28, stdev=68.48 00:10:24.093 clat percentiles (usec): 00:10:24.093 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:24.093 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:24.093 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:24.093 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:24.093 | 99.99th=[41157] 00:10:24.093 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:10:24.093 slat (nsec): min=11858, max=48217, avg=13108.73, stdev=2658.79 00:10:24.093 clat (usec): min=201, max=452, avg=232.48, stdev=19.81 00:10:24.093 lat (usec): min=213, max=492, avg=245.59, stdev=20.75 00:10:24.093 clat percentiles (usec): 00:10:24.093 | 1.00th=[ 208], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 219], 00:10:24.093 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 229], 60.00th=[ 233], 00:10:24.093 | 70.00th=[ 237], 80.00th=[ 243], 90.00th=[ 249], 95.00th=[ 265], 00:10:24.093 | 99.00th=[ 297], 99.50th=[ 326], 99.90th=[ 453], 99.95th=[ 453], 00:10:24.093 | 99.99th=[ 453] 00:10:24.093 bw ( KiB/s): min= 4096, max= 4096, per=30.08%, avg=4096.00, stdev= 0.00, samples=1 00:10:24.093 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:24.093 lat (usec) : 250=87.08%, 500=8.80% 00:10:24.093 lat (msec) : 50=4.12% 00:10:24.093 cpu : usr=0.58%, sys=0.87%, ctx=534, majf=0, minf=1 00:10:24.093 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:24.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.093 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.093 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.093 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:24.093 job3: (groupid=0, jobs=1): err= 0: pid=1425061: Wed Jul 24 19:11:10 2024 00:10:24.093 read: IOPS=21, BW=85.2KiB/s (87.2kB/s)(88.0KiB/1033msec) 00:10:24.093 slat (nsec): min=11162, max=25820, avg=24737.55, stdev=3047.22 00:10:24.093 clat (usec): min=40841, max=41984, avg=41083.45, stdev=321.55 00:10:24.093 lat (usec): min=40866, max=42010, avg=41108.18, stdev=320.53 00:10:24.093 clat percentiles (usec): 00:10:24.093 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:24.093 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:24.093 | 70.00th=[41157], 80.00th=[41157], 
90.00th=[41681], 95.00th=[42206], 00:10:24.093 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:24.093 | 99.99th=[42206] 00:10:24.093 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:10:24.093 slat (nsec): min=11304, max=39649, avg=12843.32, stdev=1990.43 00:10:24.093 clat (usec): min=193, max=483, avg=234.51, stdev=24.21 00:10:24.093 lat (usec): min=208, max=522, avg=247.35, stdev=24.71 00:10:24.093 clat percentiles (usec): 00:10:24.093 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 219], 00:10:24.094 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 229], 60.00th=[ 235], 00:10:24.094 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 262], 95.00th=[ 273], 00:10:24.094 | 99.00th=[ 318], 99.50th=[ 355], 99.90th=[ 482], 99.95th=[ 482], 00:10:24.094 | 99.99th=[ 482] 00:10:24.094 bw ( KiB/s): min= 4096, max= 4096, per=30.08%, avg=4096.00, stdev= 0.00, samples=1 00:10:24.094 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:24.094 lat (usec) : 250=79.40%, 500=16.48% 00:10:24.094 lat (msec) : 50=4.12% 00:10:24.094 cpu : usr=0.10%, sys=0.87%, ctx=534, majf=0, minf=1 00:10:24.094 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:24.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.094 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.094 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.094 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:24.094 00:10:24.094 Run status group 0 (all jobs): 00:10:24.094 READ: bw=6191KiB/s (6340kB/s), 85.0KiB/s-6138KiB/s (87.1kB/s-6285kB/s), io=6408KiB (6562kB), run=1001-1035msec 00:10:24.094 WRITE: bw=13.3MiB/s (13.9MB/s), 1979KiB/s-7940KiB/s (2026kB/s-8131kB/s), io=13.8MiB (14.4MB), run=1001-1035msec 00:10:24.094 00:10:24.094 Disk stats (read/write): 00:10:24.094 nvme0n1: ios=66/512, merge=0/0, ticks=680/119, in_queue=799, util=84.67% 00:10:24.094 nvme0n2: ios=1334/1536, merge=0/0, ticks=1424/347, in_queue=1771, util=100.00% 00:10:24.094 nvme0n3: ios=17/512, merge=0/0, ticks=697/110, in_queue=807, util=88.18% 00:10:24.094 nvme0n4: ios=17/512, merge=0/0, ticks=700/121, in_queue=821, util=89.32% 00:10:24.094 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:24.094 [global] 00:10:24.094 thread=1 00:10:24.094 invalidate=1 00:10:24.094 rw=write 00:10:24.094 time_based=1 00:10:24.094 runtime=1 00:10:24.094 ioengine=libaio 00:10:24.094 direct=1 00:10:24.094 bs=4096 00:10:24.094 iodepth=128 00:10:24.094 norandommap=0 00:10:24.094 numjobs=1 00:10:24.094 00:10:24.094 verify_dump=1 00:10:24.094 verify_backlog=512 00:10:24.094 verify_state_save=0 00:10:24.094 do_verify=1 00:10:24.094 verify=crc32c-intel 00:10:24.094 [job0] 00:10:24.094 filename=/dev/nvme0n1 00:10:24.094 [job1] 00:10:24.094 filename=/dev/nvme0n2 00:10:24.094 [job2] 00:10:24.094 filename=/dev/nvme0n3 00:10:24.094 [job3] 00:10:24.094 filename=/dev/nvme0n4 00:10:24.094 Could not set queue depth (nvme0n1) 00:10:24.094 Could not set queue depth (nvme0n2) 00:10:24.094 Could not set queue depth (nvme0n3) 00:10:24.094 Could not set queue depth (nvme0n4) 00:10:24.352 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:24.352 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:10:24.352 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:24.352 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:24.352 fio-3.35 00:10:24.352 Starting 4 threads 00:10:25.729 00:10:25.729 job0: (groupid=0, jobs=1): err= 0: pid=1425482: Wed Jul 24 19:11:11 2024 00:10:25.729 read: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec) 00:10:25.729 slat (nsec): min=1755, max=11561k, avg=103429.08, stdev=667061.00 00:10:25.729 clat (usec): min=5967, max=50178, avg=13526.05, stdev=4739.03 00:10:25.729 lat (usec): min=5974, max=60173, avg=13629.48, stdev=4797.53 00:10:25.729 clat percentiles (usec): 00:10:25.729 | 1.00th=[ 6128], 5.00th=[ 7701], 10.00th=[ 8979], 20.00th=[ 9896], 00:10:25.729 | 30.00th=[10421], 40.00th=[11207], 50.00th=[12387], 60.00th=[13304], 00:10:25.729 | 70.00th=[15533], 80.00th=[17957], 90.00th=[19530], 95.00th=[22676], 00:10:25.729 | 99.00th=[24773], 99.50th=[26608], 99.90th=[50070], 99.95th=[50070], 00:10:25.729 | 99.99th=[50070] 00:10:25.729 write: IOPS=4869, BW=19.0MiB/s (19.9MB/s)(19.2MiB/1009msec); 0 zone resets 00:10:25.729 slat (usec): min=2, max=22238, avg=98.37, stdev=645.29 00:10:25.729 clat (usec): min=1716, max=33536, avg=13242.78, stdev=5324.26 00:10:25.729 lat (usec): min=1729, max=33544, avg=13341.14, stdev=5353.45 00:10:25.729 clat percentiles (usec): 00:10:25.729 | 1.00th=[ 3654], 5.00th=[ 6456], 10.00th=[ 8586], 20.00th=[ 9372], 00:10:25.729 | 30.00th=[ 9896], 40.00th=[10683], 50.00th=[11600], 60.00th=[12649], 00:10:25.729 | 70.00th=[15664], 80.00th=[17957], 90.00th=[19006], 95.00th=[23987], 00:10:25.729 | 99.00th=[29754], 99.50th=[33162], 99.90th=[33424], 99.95th=[33424], 00:10:25.729 | 99.99th=[33424] 00:10:25.729 bw ( KiB/s): min=15848, max=22440, per=26.22%, avg=19144.00, stdev=4661.25, samples=2 00:10:25.729 iops : min= 3962, max= 5610, avg=4786.00, stdev=1165.31, samples=2 00:10:25.729 lat (msec) : 2=0.13%, 4=0.40%, 10=26.80%, 20=65.36%, 50=7.25% 00:10:25.729 lat (msec) : 100=0.06% 00:10:25.729 cpu : usr=4.86%, sys=6.55%, ctx=429, majf=0, minf=1 00:10:25.729 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:25.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:25.729 issued rwts: total=4608,4913,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:25.729 job1: (groupid=0, jobs=1): err= 0: pid=1425483: Wed Jul 24 19:11:11 2024 00:10:25.729 read: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec) 00:10:25.729 slat (usec): min=2, max=10822, avg=110.72, stdev=746.35 00:10:25.729 clat (usec): min=5010, max=59780, avg=14938.90, stdev=5481.09 00:10:25.729 lat (usec): min=5014, max=68866, avg=15049.62, stdev=5535.90 00:10:25.729 clat percentiles (usec): 00:10:25.729 | 1.00th=[ 5211], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10814], 00:10:25.729 | 30.00th=[11469], 40.00th=[12256], 50.00th=[13698], 60.00th=[15795], 00:10:25.729 | 70.00th=[17433], 80.00th=[18744], 90.00th=[20317], 95.00th=[22152], 00:10:25.729 | 99.00th=[27657], 99.50th=[59507], 99.90th=[60031], 99.95th=[60031], 00:10:25.729 | 99.99th=[60031] 00:10:25.730 write: IOPS=4567, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:10:25.730 slat (usec): min=2, max=13449, avg=107.47, stdev=658.97 00:10:25.730 clat (usec): min=1641, max=47257, 
avg=14401.05, stdev=8403.58 00:10:25.730 lat (usec): min=1659, max=47264, avg=14508.52, stdev=8468.59 00:10:25.730 clat percentiles (usec): 00:10:25.730 | 1.00th=[ 4555], 5.00th=[ 5932], 10.00th=[ 6783], 20.00th=[ 8356], 00:10:25.730 | 30.00th=[ 9503], 40.00th=[10683], 50.00th=[11863], 60.00th=[14091], 00:10:25.730 | 70.00th=[15795], 80.00th=[18744], 90.00th=[22676], 95.00th=[34341], 00:10:25.730 | 99.00th=[44303], 99.50th=[46400], 99.90th=[47449], 99.95th=[47449], 00:10:25.730 | 99.99th=[47449] 00:10:25.730 bw ( KiB/s): min=12616, max=23152, per=24.49%, avg=17884.00, stdev=7450.08, samples=2 00:10:25.730 iops : min= 3154, max= 5788, avg=4471.00, stdev=1862.52, samples=2 00:10:25.730 lat (msec) : 2=0.09%, 4=0.26%, 10=22.84%, 20=64.50%, 50=12.05% 00:10:25.730 lat (msec) : 100=0.25% 00:10:25.730 cpu : usr=4.67%, sys=6.06%, ctx=306, majf=0, minf=1 00:10:25.730 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:25.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.730 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:25.730 issued rwts: total=4096,4599,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.730 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:25.730 job2: (groupid=0, jobs=1): err= 0: pid=1425484: Wed Jul 24 19:11:11 2024 00:10:25.730 read: IOPS=3748, BW=14.6MiB/s (15.4MB/s)(15.0MiB/1026msec) 00:10:25.730 slat (usec): min=2, max=13435, avg=101.92, stdev=735.31 00:10:25.730 clat (usec): min=4991, max=46164, avg=13555.61, stdev=5360.63 00:10:25.730 lat (usec): min=5000, max=46171, avg=13657.53, stdev=5399.33 00:10:25.730 clat percentiles (usec): 00:10:25.730 | 1.00th=[ 7570], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10421], 00:10:25.730 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11207], 60.00th=[12125], 00:10:25.730 | 70.00th=[13829], 80.00th=[16188], 90.00th=[18744], 95.00th=[26346], 00:10:25.730 | 99.00th=[36439], 99.50th=[36963], 99.90th=[46400], 99.95th=[46400], 00:10:25.730 | 99.99th=[46400] 00:10:25.730 write: IOPS=3992, BW=15.6MiB/s (16.4MB/s)(16.0MiB/1026msec); 0 zone resets 00:10:25.730 slat (usec): min=2, max=16382, avg=139.83, stdev=810.14 00:10:25.730 clat (usec): min=1876, max=98050, avg=18457.30, stdev=19963.47 00:10:25.730 lat (usec): min=1890, max=98064, avg=18597.14, stdev=20100.88 00:10:25.730 clat percentiles (usec): 00:10:25.730 | 1.00th=[ 3851], 5.00th=[ 5473], 10.00th=[ 7111], 20.00th=[ 8979], 00:10:25.730 | 30.00th=[ 9896], 40.00th=[10814], 50.00th=[11207], 60.00th=[11338], 00:10:25.730 | 70.00th=[12518], 80.00th=[17957], 90.00th=[49546], 95.00th=[70779], 00:10:25.730 | 99.00th=[92799], 99.50th=[94897], 99.90th=[98042], 99.95th=[98042], 00:10:25.730 | 99.99th=[98042] 00:10:25.730 bw ( KiB/s): min= 9968, max=22800, per=22.44%, avg=16384.00, stdev=9073.59, samples=2 00:10:25.730 iops : min= 2492, max= 5700, avg=4096.00, stdev=2268.40, samples=2 00:10:25.730 lat (msec) : 2=0.10%, 4=0.54%, 10=21.37%, 20=65.59%, 50=7.27% 00:10:25.730 lat (msec) : 100=5.14% 00:10:25.730 cpu : usr=4.29%, sys=4.88%, ctx=491, majf=0, minf=1 00:10:25.730 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:25.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.730 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:25.730 issued rwts: total=3846,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.730 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:25.730 job3: (groupid=0, jobs=1): err= 
0: pid=1425485: Wed Jul 24 19:11:11 2024 00:10:25.730 read: IOPS=4930, BW=19.3MiB/s (20.2MB/s)(19.3MiB/1004msec) 00:10:25.730 slat (usec): min=2, max=10456, avg=102.63, stdev=591.50 00:10:25.730 clat (usec): min=2840, max=34501, avg=13340.97, stdev=3165.79 00:10:25.730 lat (usec): min=2847, max=34524, avg=13443.61, stdev=3196.04 00:10:25.730 clat percentiles (usec): 00:10:25.730 | 1.00th=[ 7767], 5.00th=[10159], 10.00th=[10945], 20.00th=[11469], 00:10:25.730 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12649], 60.00th=[13042], 00:10:25.730 | 70.00th=[13566], 80.00th=[14353], 90.00th=[16450], 95.00th=[20055], 00:10:25.730 | 99.00th=[24773], 99.50th=[29230], 99.90th=[29492], 99.95th=[29492], 00:10:25.730 | 99.99th=[34341] 00:10:25.730 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:10:25.730 slat (usec): min=2, max=11056, avg=88.12, stdev=502.90 00:10:25.730 clat (usec): min=1885, max=21899, avg=11956.33, stdev=2126.46 00:10:25.730 lat (usec): min=1901, max=21904, avg=12044.45, stdev=2143.39 00:10:25.730 clat percentiles (usec): 00:10:25.730 | 1.00th=[ 5407], 5.00th=[ 8029], 10.00th=[ 9503], 20.00th=[10814], 00:10:25.730 | 30.00th=[11469], 40.00th=[11731], 50.00th=[12125], 60.00th=[12387], 00:10:25.730 | 70.00th=[12649], 80.00th=[13304], 90.00th=[13960], 95.00th=[15008], 00:10:25.730 | 99.00th=[17695], 99.50th=[17957], 99.90th=[19530], 99.95th=[19792], 00:10:25.730 | 99.99th=[21890] 00:10:25.730 bw ( KiB/s): min=20480, max=20480, per=28.05%, avg=20480.00, stdev= 0.00, samples=2 00:10:25.730 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:10:25.730 lat (msec) : 2=0.11%, 4=0.21%, 10=8.22%, 20=88.98%, 50=2.48% 00:10:25.730 cpu : usr=4.59%, sys=7.18%, ctx=497, majf=0, minf=1 00:10:25.730 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:25.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.730 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:25.730 issued rwts: total=4950,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.730 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:25.730 00:10:25.730 Run status group 0 (all jobs): 00:10:25.730 READ: bw=66.6MiB/s (69.9MB/s), 14.6MiB/s-19.3MiB/s (15.4MB/s-20.2MB/s), io=68.4MiB (71.7MB), run=1004-1026msec 00:10:25.730 WRITE: bw=71.3MiB/s (74.8MB/s), 15.6MiB/s-19.9MiB/s (16.4MB/s-20.9MB/s), io=73.2MiB (76.7MB), run=1004-1026msec 00:10:25.730 00:10:25.730 Disk stats (read/write): 00:10:25.730 nvme0n1: ios=3640/4096, merge=0/0, ticks=26890/29417, in_queue=56307, util=99.00% 00:10:25.730 nvme0n2: ios=3787/4096, merge=0/0, ticks=37868/34585, in_queue=72453, util=99.49% 00:10:25.730 nvme0n3: ios=2721/3072, merge=0/0, ticks=36344/64079, in_queue=100423, util=96.81% 00:10:25.730 nvme0n4: ios=4096/4392, merge=0/0, ticks=22723/22396, in_queue=45119, util=89.00% 00:10:25.730 19:11:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:25.730 [global] 00:10:25.730 thread=1 00:10:25.730 invalidate=1 00:10:25.730 rw=randwrite 00:10:25.730 time_based=1 00:10:25.730 runtime=1 00:10:25.730 ioengine=libaio 00:10:25.730 direct=1 00:10:25.730 bs=4096 00:10:25.730 iodepth=128 00:10:25.730 norandommap=0 00:10:25.730 numjobs=1 00:10:25.730 00:10:25.730 verify_dump=1 00:10:25.730 verify_backlog=512 00:10:25.730 verify_state_save=0 00:10:25.730 do_verify=1 00:10:25.730 verify=crc32c-intel 
00:10:25.730 [job0] 00:10:25.730 filename=/dev/nvme0n1 00:10:25.730 [job1] 00:10:25.730 filename=/dev/nvme0n2 00:10:25.730 [job2] 00:10:25.730 filename=/dev/nvme0n3 00:10:25.730 [job3] 00:10:25.730 filename=/dev/nvme0n4 00:10:25.730 Could not set queue depth (nvme0n1) 00:10:25.730 Could not set queue depth (nvme0n2) 00:10:25.730 Could not set queue depth (nvme0n3) 00:10:25.730 Could not set queue depth (nvme0n4) 00:10:25.988 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:25.988 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:25.988 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:25.988 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:25.988 fio-3.35 00:10:25.988 Starting 4 threads 00:10:27.367 00:10:27.367 job0: (groupid=0, jobs=1): err= 0: pid=1425904: Wed Jul 24 19:11:13 2024 00:10:27.367 read: IOPS=3424, BW=13.4MiB/s (14.0MB/s)(13.5MiB/1006msec) 00:10:27.367 slat (nsec): min=1719, max=14811k, avg=140478.48, stdev=915044.33 00:10:27.367 clat (usec): min=4683, max=55445, avg=18917.24, stdev=11660.18 00:10:27.367 lat (usec): min=5455, max=55473, avg=19057.72, stdev=11719.21 00:10:27.367 clat percentiles (usec): 00:10:27.367 | 1.00th=[ 7308], 5.00th=[ 8586], 10.00th=[ 9765], 20.00th=[10421], 00:10:27.367 | 30.00th=[11076], 40.00th=[11731], 50.00th=[12649], 60.00th=[16319], 00:10:27.367 | 70.00th=[20317], 80.00th=[29230], 90.00th=[36963], 95.00th=[45351], 00:10:27.367 | 99.00th=[51119], 99.50th=[52167], 99.90th=[53740], 99.95th=[53740], 00:10:27.367 | 99.99th=[55313] 00:10:27.367 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:10:27.367 slat (usec): min=2, max=14913, avg=137.34, stdev=772.63 00:10:27.367 clat (usec): min=5260, max=58979, avg=17200.98, stdev=10459.82 00:10:27.367 lat (usec): min=5265, max=58990, avg=17338.31, stdev=10548.83 00:10:27.367 clat percentiles (usec): 00:10:27.367 | 1.00th=[ 6718], 5.00th=[ 7635], 10.00th=[ 8291], 20.00th=[ 9241], 00:10:27.367 | 30.00th=[10159], 40.00th=[11469], 50.00th=[11994], 60.00th=[15533], 00:10:27.367 | 70.00th=[20841], 80.00th=[22938], 90.00th=[33817], 95.00th=[40109], 00:10:27.367 | 99.00th=[49021], 99.50th=[54264], 99.90th=[58983], 99.95th=[58983], 00:10:27.367 | 99.99th=[58983] 00:10:27.367 bw ( KiB/s): min=11256, max=17416, per=19.98%, avg=14336.00, stdev=4355.78, samples=2 00:10:27.367 iops : min= 2814, max= 4354, avg=3584.00, stdev=1088.94, samples=2 00:10:27.367 lat (msec) : 10=17.36%, 20=50.39%, 50=31.23%, 100=1.02% 00:10:27.367 cpu : usr=2.29%, sys=4.78%, ctx=355, majf=0, minf=1 00:10:27.367 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:27.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.367 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:27.367 issued rwts: total=3445,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.367 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:27.367 job1: (groupid=0, jobs=1): err= 0: pid=1425905: Wed Jul 24 19:11:13 2024 00:10:27.367 read: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec) 00:10:27.367 slat (nsec): min=1726, max=14622k, avg=91402.37, stdev=578614.26 00:10:27.367 clat (usec): min=5577, max=37987, avg=12113.41, stdev=4273.84 00:10:27.367 lat (usec): min=5585, max=41343, 
avg=12204.81, stdev=4308.55 00:10:27.367 clat percentiles (usec): 00:10:27.367 | 1.00th=[ 6456], 5.00th=[ 7898], 10.00th=[ 8455], 20.00th=[ 9241], 00:10:27.367 | 30.00th=[ 9765], 40.00th=[10552], 50.00th=[10945], 60.00th=[11731], 00:10:27.367 | 70.00th=[12256], 80.00th=[13435], 90.00th=[18220], 95.00th=[21365], 00:10:27.367 | 99.00th=[26608], 99.50th=[29230], 99.90th=[38011], 99.95th=[38011], 00:10:27.367 | 99.99th=[38011] 00:10:27.367 write: IOPS=5218, BW=20.4MiB/s (21.4MB/s)(20.5MiB/1007msec); 0 zone resets 00:10:27.367 slat (usec): min=2, max=12445, avg=93.18, stdev=635.92 00:10:27.367 clat (usec): min=4304, max=37560, avg=12434.34, stdev=4572.79 00:10:27.367 lat (usec): min=4310, max=37571, avg=12527.53, stdev=4614.53 00:10:27.367 clat percentiles (usec): 00:10:27.367 | 1.00th=[ 5407], 5.00th=[ 8225], 10.00th=[ 9110], 20.00th=[ 9503], 00:10:27.367 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[11076], 60.00th=[11600], 00:10:27.367 | 70.00th=[12518], 80.00th=[15401], 90.00th=[18482], 95.00th=[22414], 00:10:27.367 | 99.00th=[31065], 99.50th=[35390], 99.90th=[37487], 99.95th=[37487], 00:10:27.367 | 99.99th=[37487] 00:10:27.367 bw ( KiB/s): min=20480, max=20600, per=28.62%, avg=20540.00, stdev=84.85, samples=2 00:10:27.367 iops : min= 5120, max= 5150, avg=5135.00, stdev=21.21, samples=2 00:10:27.367 lat (msec) : 10=32.95%, 20=60.31%, 50=6.74% 00:10:27.367 cpu : usr=4.87%, sys=7.75%, ctx=361, majf=0, minf=1 00:10:27.367 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:27.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.367 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:27.367 issued rwts: total=5120,5255,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.367 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:27.367 job2: (groupid=0, jobs=1): err= 0: pid=1425906: Wed Jul 24 19:11:13 2024 00:10:27.367 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:10:27.367 slat (nsec): min=1785, max=12511k, avg=109512.49, stdev=759319.65 00:10:27.367 clat (usec): min=3789, max=50717, avg=15218.73, stdev=6584.38 00:10:27.367 lat (usec): min=3801, max=56565, avg=15328.24, stdev=6627.90 00:10:27.367 clat percentiles (usec): 00:10:27.367 | 1.00th=[ 6063], 5.00th=[ 8291], 10.00th=[ 9110], 20.00th=[10814], 00:10:27.367 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12256], 60.00th=[14222], 00:10:27.367 | 70.00th=[17433], 80.00th=[19268], 90.00th=[23725], 95.00th=[27919], 00:10:27.367 | 99.00th=[35390], 99.50th=[39060], 99.90th=[50594], 99.95th=[50594], 00:10:27.367 | 99.99th=[50594] 00:10:27.367 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:10:27.367 slat (usec): min=2, max=8521, avg=73.62, stdev=504.29 00:10:27.367 clat (usec): min=1145, max=40496, avg=11246.44, stdev=4682.61 00:10:27.367 lat (usec): min=1413, max=40507, avg=11320.07, stdev=4695.54 00:10:27.367 clat percentiles (usec): 00:10:27.367 | 1.00th=[ 2540], 5.00th=[ 4948], 10.00th=[ 5997], 20.00th=[ 8029], 00:10:27.367 | 30.00th=[ 9503], 40.00th=[10290], 50.00th=[11076], 60.00th=[11863], 00:10:27.367 | 70.00th=[12387], 80.00th=[13042], 90.00th=[14746], 95.00th=[18482], 00:10:27.367 | 99.00th=[28967], 99.50th=[34341], 99.90th=[38011], 99.95th=[38011], 00:10:27.367 | 99.99th=[40633] 00:10:27.367 bw ( KiB/s): min=19416, max=20480, per=27.80%, avg=19948.00, stdev=752.36, samples=2 00:10:27.367 iops : min= 4854, max= 5120, avg=4987.00, stdev=188.09, samples=2 00:10:27.367 lat (msec) : 2=0.12%, 4=1.28%, 10=22.63%, 
20=65.09%, 50=10.71% 00:10:27.367 lat (msec) : 100=0.17% 00:10:27.367 cpu : usr=4.18%, sys=6.47%, ctx=427, majf=0, minf=1 00:10:27.367 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:27.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.367 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:27.367 issued rwts: total=4608,5115,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.367 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:27.367 job3: (groupid=0, jobs=1): err= 0: pid=1425907: Wed Jul 24 19:11:13 2024 00:10:27.368 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:10:27.368 slat (usec): min=2, max=14327, avg=120.12, stdev=716.74 00:10:27.368 clat (usec): min=4365, max=35690, avg=15443.27, stdev=5069.97 00:10:27.368 lat (usec): min=4369, max=35719, avg=15563.39, stdev=5123.77 00:10:27.368 clat percentiles (usec): 00:10:27.368 | 1.00th=[ 4686], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[11207], 00:10:27.368 | 30.00th=[12125], 40.00th=[13435], 50.00th=[14484], 60.00th=[16581], 00:10:27.368 | 70.00th=[17695], 80.00th=[19006], 90.00th=[21890], 95.00th=[25560], 00:10:27.368 | 99.00th=[30278], 99.50th=[31851], 99.90th=[33424], 99.95th=[33424], 00:10:27.368 | 99.99th=[35914] 00:10:27.368 write: IOPS=4104, BW=16.0MiB/s (16.8MB/s)(16.1MiB/1002msec); 0 zone resets 00:10:27.368 slat (usec): min=3, max=13073, avg=112.71, stdev=747.09 00:10:27.368 clat (usec): min=895, max=35588, avg=15396.25, stdev=4035.62 00:10:27.368 lat (usec): min=4228, max=35629, avg=15508.96, stdev=4080.26 00:10:27.368 clat percentiles (usec): 00:10:27.368 | 1.00th=[ 7242], 5.00th=[10421], 10.00th=[11207], 20.00th=[11600], 00:10:27.368 | 30.00th=[12649], 40.00th=[14222], 50.00th=[14746], 60.00th=[16450], 00:10:27.368 | 70.00th=[17695], 80.00th=[18482], 90.00th=[19792], 95.00th=[22414], 00:10:27.368 | 99.00th=[28443], 99.50th=[29492], 99.90th=[29754], 99.95th=[30802], 00:10:27.368 | 99.99th=[35390] 00:10:27.368 bw ( KiB/s): min=16384, max=16384, per=22.83%, avg=16384.00, stdev= 0.00, samples=1 00:10:27.368 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:10:27.368 lat (usec) : 1000=0.01% 00:10:27.368 lat (msec) : 10=8.23%, 20=79.40%, 50=12.35% 00:10:27.368 cpu : usr=5.49%, sys=7.89%, ctx=281, majf=0, minf=1 00:10:27.368 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:27.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.368 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:27.368 issued rwts: total=4096,4113,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.368 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:27.368 00:10:27.368 Run status group 0 (all jobs): 00:10:27.368 READ: bw=67.0MiB/s (70.2MB/s), 13.4MiB/s-19.9MiB/s (14.0MB/s-20.8MB/s), io=67.5MiB (70.7MB), run=1002-1007msec 00:10:27.368 WRITE: bw=70.1MiB/s (73.5MB/s), 13.9MiB/s-20.4MiB/s (14.6MB/s-21.4MB/s), io=70.6MiB (74.0MB), run=1002-1007msec 00:10:27.368 00:10:27.368 Disk stats (read/write): 00:10:27.368 nvme0n1: ios=3101/3215, merge=0/0, ticks=19347/16544, in_queue=35891, util=97.09% 00:10:27.368 nvme0n2: ios=4174/4608, merge=0/0, ticks=23764/26564, in_queue=50328, util=88.63% 00:10:27.368 nvme0n3: ios=3932/4096, merge=0/0, ticks=43832/38200, in_queue=82032, util=87.30% 00:10:27.368 nvme0n4: ios=3130/3446, merge=0/0, ticks=26848/23661, in_queue=50509, util=99.46% 00:10:27.368 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@55 -- # sync 00:10:27.368 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1426171 00:10:27.368 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:27.368 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:27.368 [global] 00:10:27.368 thread=1 00:10:27.368 invalidate=1 00:10:27.368 rw=read 00:10:27.368 time_based=1 00:10:27.368 runtime=10 00:10:27.368 ioengine=libaio 00:10:27.368 direct=1 00:10:27.368 bs=4096 00:10:27.368 iodepth=1 00:10:27.368 norandommap=1 00:10:27.368 numjobs=1 00:10:27.368 00:10:27.368 [job0] 00:10:27.368 filename=/dev/nvme0n1 00:10:27.368 [job1] 00:10:27.368 filename=/dev/nvme0n2 00:10:27.368 [job2] 00:10:27.368 filename=/dev/nvme0n3 00:10:27.368 [job3] 00:10:27.368 filename=/dev/nvme0n4 00:10:27.368 Could not set queue depth (nvme0n1) 00:10:27.368 Could not set queue depth (nvme0n2) 00:10:27.368 Could not set queue depth (nvme0n3) 00:10:27.368 Could not set queue depth (nvme0n4) 00:10:27.626 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:27.626 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:27.626 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:27.626 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:27.626 fio-3.35 00:10:27.626 Starting 4 threads 00:10:30.918 19:11:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:30.918 19:11:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:30.918 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=266240, buflen=4096 00:10:30.918 fio: pid=1426330, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:30.918 19:11:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:30.918 19:11:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:30.918 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=27070464, buflen=4096 00:10:30.918 fio: pid=1426329, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:30.918 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=33157120, buflen=4096 00:10:30.918 fio: pid=1426327, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:30.918 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:30.918 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:31.179 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=7229440, buflen=4096 00:10:31.179 fio: pid=1426328, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:31.179 19:11:17 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:31.179 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:31.179 00:10:31.179 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1426327: Wed Jul 24 19:11:17 2024 00:10:31.179 read: IOPS=2703, BW=10.6MiB/s (11.1MB/s)(31.6MiB/2995msec) 00:10:31.179 slat (usec): min=6, max=7994, avg=11.04, stdev=121.55 00:10:31.179 clat (usec): min=249, max=622, avg=355.27, stdev=22.58 00:10:31.179 lat (usec): min=256, max=8510, avg=366.31, stdev=126.60 00:10:31.179 clat percentiles (usec): 00:10:31.179 | 1.00th=[ 285], 5.00th=[ 326], 10.00th=[ 334], 20.00th=[ 343], 00:10:31.179 | 30.00th=[ 347], 40.00th=[ 351], 50.00th=[ 355], 60.00th=[ 359], 00:10:31.179 | 70.00th=[ 363], 80.00th=[ 367], 90.00th=[ 375], 95.00th=[ 383], 00:10:31.179 | 99.00th=[ 437], 99.50th=[ 441], 99.90th=[ 469], 99.95th=[ 519], 00:10:31.179 | 99.99th=[ 627] 00:10:31.179 bw ( KiB/s): min=10928, max=11064, per=52.96%, avg=10973.20, stdev=55.13, samples=5 00:10:31.179 iops : min= 2732, max= 2766, avg=2743.20, stdev=13.75, samples=5 00:10:31.179 lat (usec) : 250=0.01%, 500=99.90%, 750=0.07% 00:10:31.179 cpu : usr=0.90%, sys=3.17%, ctx=8098, majf=0, minf=1 00:10:31.179 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.179 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.179 issued rwts: total=8096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.179 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.179 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1426328: Wed Jul 24 19:11:17 2024 00:10:31.179 read: IOPS=553, BW=2212KiB/s (2265kB/s)(7060KiB/3192msec) 00:10:31.179 slat (usec): min=5, max=11434, avg=20.98, stdev=328.99 00:10:31.179 clat (usec): min=226, max=42017, avg=1771.09, stdev=7374.12 00:10:31.179 lat (usec): min=236, max=42042, avg=1792.07, stdev=7382.36 00:10:31.179 clat percentiles (usec): 00:10:31.179 | 1.00th=[ 253], 5.00th=[ 293], 10.00th=[ 302], 20.00th=[ 338], 00:10:31.179 | 30.00th=[ 363], 40.00th=[ 383], 50.00th=[ 396], 60.00th=[ 404], 00:10:31.179 | 70.00th=[ 412], 80.00th=[ 424], 90.00th=[ 465], 95.00th=[ 519], 00:10:31.179 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:31.180 | 99.99th=[42206] 00:10:31.180 bw ( KiB/s): min= 95, max= 6970, per=9.42%, avg=1952.17, stdev=2992.04, samples=6 00:10:31.180 iops : min= 23, max= 1742, avg=487.83, stdev=747.93, samples=6 00:10:31.180 lat (usec) : 250=0.96%, 500=91.96%, 750=3.57% 00:10:31.180 lat (msec) : 10=0.06%, 50=3.40% 00:10:31.180 cpu : usr=0.16%, sys=0.78%, ctx=1772, majf=0, minf=1 00:10:31.180 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.180 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.180 issued rwts: total=1766,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.180 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.180 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1426329: Wed Jul 24 19:11:17 2024 
00:10:31.180 read: IOPS=2327, BW=9308KiB/s (9532kB/s)(25.8MiB/2840msec) 00:10:31.180 slat (nsec): min=7012, max=76132, avg=9807.20, stdev=2071.12 00:10:31.180 clat (usec): min=237, max=41949, avg=414.11, stdev=1306.43 00:10:31.180 lat (usec): min=246, max=41974, avg=423.92, stdev=1307.09 00:10:31.180 clat percentiles (usec): 00:10:31.180 | 1.00th=[ 322], 5.00th=[ 347], 10.00th=[ 355], 20.00th=[ 359], 00:10:31.180 | 30.00th=[ 363], 40.00th=[ 367], 50.00th=[ 371], 60.00th=[ 371], 00:10:31.180 | 70.00th=[ 375], 80.00th=[ 379], 90.00th=[ 388], 95.00th=[ 400], 00:10:31.180 | 99.00th=[ 498], 99.50th=[ 506], 99.90th=[35914], 99.95th=[41157], 00:10:31.180 | 99.99th=[42206] 00:10:31.180 bw ( KiB/s): min= 9400, max=10562, per=49.71%, avg=10299.60, stdev=504.88, samples=5 00:10:31.180 iops : min= 2350, max= 2640, avg=2574.80, stdev=126.16, samples=5 00:10:31.180 lat (usec) : 250=0.02%, 500=99.09%, 750=0.74% 00:10:31.180 lat (msec) : 2=0.03%, 50=0.11% 00:10:31.180 cpu : usr=1.55%, sys=3.98%, ctx=6613, majf=0, minf=1 00:10:31.180 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.180 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.180 issued rwts: total=6610,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.180 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.180 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1426330: Wed Jul 24 19:11:17 2024 00:10:31.180 read: IOPS=24, BW=98.1KiB/s (100kB/s)(260KiB/2650msec) 00:10:31.180 slat (nsec): min=11099, max=32270, avg=25090.44, stdev=2532.80 00:10:31.180 clat (usec): min=462, max=43096, avg=40418.21, stdev=5043.89 00:10:31.180 lat (usec): min=486, max=43128, avg=40443.29, stdev=5044.07 00:10:31.180 clat percentiles (usec): 00:10:31.180 | 1.00th=[ 461], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:31.180 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:31.180 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:10:31.180 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:31.180 | 99.99th=[43254] 00:10:31.180 bw ( KiB/s): min= 96, max= 104, per=0.48%, avg=99.20, stdev= 4.38, samples=5 00:10:31.180 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:10:31.180 lat (usec) : 500=1.52% 00:10:31.180 lat (msec) : 50=96.97% 00:10:31.180 cpu : usr=0.00%, sys=0.15%, ctx=66, majf=0, minf=2 00:10:31.180 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.180 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.180 issued rwts: total=66,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.180 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.180 00:10:31.180 Run status group 0 (all jobs): 00:10:31.180 READ: bw=20.2MiB/s (21.2MB/s), 98.1KiB/s-10.6MiB/s (100kB/s-11.1MB/s), io=64.6MiB (67.7MB), run=2650-3192msec 00:10:31.180 00:10:31.180 Disk stats (read/write): 00:10:31.180 nvme0n1: ios=7750/0, merge=0/0, ticks=2692/0, in_queue=2692, util=94.09% 00:10:31.180 nvme0n2: ios=1631/0, merge=0/0, ticks=4012/0, in_queue=4012, util=99.50% 00:10:31.180 nvme0n3: ios=6610/0, merge=0/0, ticks=2665/0, in_queue=2665, util=96.12% 00:10:31.180 nvme0n4: ios=63/0, merge=0/0, ticks=2545/0, in_queue=2545, util=96.41% 00:10:31.180 19:11:17 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:31.180 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:31.443 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:31.443 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:31.702 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:31.702 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:31.702 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:31.702 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:31.961 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:31.961 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1426171 00:10:31.961 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:31.961 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:32.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.221 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:32.221 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:32.221 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:32.221 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:32.221 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:32.221 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:32.221 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:32.221 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:32.221 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:32.221 nvmf hotplug test: fio failed as expected 00:10:32.221 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:32.480 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:32.480 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:32.480 19:11:18 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:32.480 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:32.480 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:32.480 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:32.480 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:32.480 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:32.480 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:10:32.480 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:32.480 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:32.480 rmmod nvme_tcp 00:10:32.480 rmmod nvme_fabrics 00:10:32.480 rmmod nvme_keyring 00:10:32.480 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:32.480 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:32.480 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:32.480 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1423093 ']' 00:10:32.480 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1423093 00:10:32.480 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1423093 ']' 00:10:32.480 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1423093 00:10:32.480 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:32.480 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:32.480 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1423093 00:10:32.480 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:32.480 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:32.480 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1423093' 00:10:32.480 killing process with pid 1423093 00:10:32.480 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1423093 00:10:32.480 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1423093 00:10:32.740 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:32.740 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:32.740 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:32.740 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:32.740 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:32.740 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
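For reference, the hotplug teardown just traced maps onto a short standalone sequence; a minimal sketch, assuming the target's pid is held in $nvmfpid (the harness tracks it as nvmfpid=1423093 here) and an SPDK checkout at ./spdk:

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1                            # detach the kernel initiator first
  ./spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # then remove the subsystem
  kill "$nvmfpid" && wait "$nvmfpid"                                       # roughly what killprocess does, minus its safety checks
  modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics                   # produces the rmmod lines seen above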
00:10:32.740 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.740 19:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.648 19:11:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:34.648 00:10:34.648 real 0m28.330s 00:10:34.648 user 2m3.069s 00:10:34.648 sys 0m10.294s 00:10:34.648 19:11:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:34.648 19:11:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.648 ************************************ 00:10:34.648 END TEST nvmf_fio_target 00:10:34.648 ************************************ 00:10:34.908 19:11:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:34.908 19:11:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:34.908 19:11:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:34.908 19:11:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:34.908 ************************************ 00:10:34.908 START TEST nvmf_bdevio 00:10:34.908 ************************************ 00:10:34.908 19:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:34.908 * Looking for test storage... 00:10:34.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:34.908 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:34.908 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:34.908 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:34.908 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:34.908 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:34.908 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:34.908 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:10:34.909 19:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:41.483 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:41.483 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:10:41.483 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 
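Aside: the host identity carried through these suites (NVME_HOSTNQN/NVME_HOSTID above) comes from nvme-cli; a sketch of one way to derive both values (the exact extraction in common.sh is not visible in this trace, so the second line is an assumption):

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep only the trailing UUID, matching NVME_HOSTID in the trace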
00:10:41.483 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:41.483 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:41.483 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:41.483 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:41.483 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:10:41.483 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:41.483 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:10:41.483 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:10:41.483 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:10:41.483 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:10:41.742 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:10:41.742 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:10:41.742 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:41.742 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:41.742 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:41.742 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:41.742 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:41.742 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:41.742 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:41.742 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:41.742 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:41.742 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:41.742 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:41.742 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:41.742 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:41.742 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:41.742 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:41.742 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:41.742 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:41.742 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:41.742 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:41.742 
Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:41.742 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:41.742 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:41.742 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.742 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.742 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:41.742 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:41.742 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:41.742 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:41.742 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:41.742 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:41.743 Found net devices under 0000:af:00.0: cvl_0_0 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 
)) 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:41.743 Found net devices under 0000:af:00.1: cvl_0_1 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:41.743 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:42.003 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:42.003 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:42.003 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:42.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:42.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:10:42.003 00:10:42.003 --- 10.0.0.2 ping statistics --- 00:10:42.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.003 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:10:42.003 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:42.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:42.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:10:42.003 00:10:42.003 --- 10.0.0.1 ping statistics --- 00:10:42.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.003 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:10:42.003 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:42.003 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:10:42.003 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:42.003 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:42.003 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:42.003 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:42.003 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:42.003 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:42.003 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:42.003 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:42.004 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:42.004 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:42.004 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.004 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1430833 00:10:42.004 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1430833 00:10:42.004 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:42.004 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1430833 ']' 00:10:42.004 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.004 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:42.004 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
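Collected from the nvmf_tcp_init trace above, the namespace plumbing that makes these pings work is, in essence (interface names cvl_0_0/cvl_0_1 and the binary path are specific to this machine):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                              # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                    # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0      # target side
  ip link set cvl_0_1 up && ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT           # open the NVMe/TCP listener port
  ip netns exec cvl_0_0_ns_spdk ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &   # then launch the target inside it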
00:10:42.004 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:42.004 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.004 [2024-07-24 19:11:28.141251] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:10:42.004 [2024-07-24 19:11:28.141300] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:42.004 EAL: No free 2048 kB hugepages reported on node 1 00:10:42.004 [2024-07-24 19:11:28.214460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:42.263 [2024-07-24 19:11:28.287591] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:42.263 [2024-07-24 19:11:28.287640] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:42.263 [2024-07-24 19:11:28.287650] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:42.263 [2024-07-24 19:11:28.287677] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:42.263 [2024-07-24 19:11:28.287684] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:42.263 [2024-07-24 19:11:28.287800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:42.263 [2024-07-24 19:11:28.287911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:42.263 [2024-07-24 19:11:28.288018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:42.263 [2024-07-24 19:11:28.288019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:42.830 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:42.830 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:42.830 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:42.830 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:42.830 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.830 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:42.830 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:42.830 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.830 19:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.830 [2024-07-24 19:11:28.999079] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:42.830 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.830 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:42.830 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.830 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.830 Malloc0 00:10:42.830 
19:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.830 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:42.830 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.830 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.830 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.830 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:42.830 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.830 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.830 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.830 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:42.830 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.830 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.830 [2024-07-24 19:11:29.045598] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:42.830 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.830 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:42.830 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:42.830 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:10:42.830 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:10:42.830 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:42.830 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:42.830 { 00:10:42.830 "params": { 00:10:42.830 "name": "Nvme$subsystem", 00:10:42.830 "trtype": "$TEST_TRANSPORT", 00:10:42.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:42.830 "adrfam": "ipv4", 00:10:42.830 "trsvcid": "$NVMF_PORT", 00:10:42.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:42.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:42.830 "hdgst": ${hdgst:-false}, 00:10:42.830 "ddgst": ${ddgst:-false} 00:10:42.830 }, 00:10:42.830 "method": "bdev_nvme_attach_controller" 00:10:42.830 } 00:10:42.830 EOF 00:10:42.830 )") 00:10:42.830 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:10:42.830 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
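Condensing the rpc_cmd calls just traced, the bdevio target is brought up by this standalone sequence (rpc.py path assumed relative to an SPDK checkout; flags exactly as above):

  rpc=./spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0                              # MALLOC_BDEV_SIZE=64 (MiB), 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420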
00:10:42.830 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:10:42.830 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:42.830 "params": { 00:10:42.830 "name": "Nvme1", 00:10:42.830 "trtype": "tcp", 00:10:42.830 "traddr": "10.0.0.2", 00:10:42.830 "adrfam": "ipv4", 00:10:42.830 "trsvcid": "4420", 00:10:42.830 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:42.830 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:42.830 "hdgst": false, 00:10:42.830 "ddgst": false 00:10:42.830 }, 00:10:42.830 "method": "bdev_nvme_attach_controller" 00:10:42.830 }' 00:10:43.088 [2024-07-24 19:11:29.098122] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:10:43.088 [2024-07-24 19:11:29.098170] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1431110 ] 00:10:43.088 EAL: No free 2048 kB hugepages reported on node 1 00:10:43.088 [2024-07-24 19:11:29.168431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:43.088 [2024-07-24 19:11:29.239946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.088 [2024-07-24 19:11:29.240042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:43.088 [2024-07-24 19:11:29.240047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.346 I/O targets: 00:10:43.346 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:43.346 00:10:43.346 00:10:43.346 CUnit - A unit testing framework for C - Version 2.1-3 00:10:43.346 http://cunit.sourceforge.net/ 00:10:43.346 00:10:43.346 00:10:43.346 Suite: bdevio tests on: Nvme1n1 00:10:43.346 Test: blockdev write read block ...passed 00:10:43.604 Test: blockdev write zeroes read block ...passed 00:10:43.604 Test: blockdev write zeroes read no split ...passed 00:10:43.604 Test: blockdev write zeroes read split ...passed 00:10:43.604 Test: blockdev write zeroes read split partial ...passed 00:10:43.604 Test: blockdev reset ...[2024-07-24 19:11:29.730425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:43.604 [2024-07-24 19:11:29.730494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2496810 (9): Bad file descriptor 00:10:43.604 [2024-07-24 19:11:29.828125] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:43.604 passed 00:10:43.604 Test: blockdev write read 8 blocks ...passed 00:10:43.604 Test: blockdev write read size > 128k ...passed 00:10:43.604 Test: blockdev write read invalid size ...passed 00:10:43.863 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:43.863 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:43.863 Test: blockdev write read max offset ...passed 00:10:43.863 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:43.863 Test: blockdev writev readv 8 blocks ...passed 00:10:43.863 Test: blockdev writev readv 30 x 1block ...passed 00:10:43.863 Test: blockdev writev readv block ...passed 00:10:43.863 Test: blockdev writev readv size > 128k ...passed 00:10:43.863 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:43.863 Test: blockdev comparev and writev ...[2024-07-24 19:11:30.046225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.863 [2024-07-24 19:11:30.046255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:43.863 [2024-07-24 19:11:30.046275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.863 [2024-07-24 19:11:30.046292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:43.863 [2024-07-24 19:11:30.046610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.863 [2024-07-24 19:11:30.046623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:43.863 [2024-07-24 19:11:30.046637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.863 [2024-07-24 19:11:30.046647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:43.863 [2024-07-24 19:11:30.046941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.863 [2024-07-24 19:11:30.046954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:43.863 [2024-07-24 19:11:30.046969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.863 [2024-07-24 19:11:30.046980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:43.863 [2024-07-24 19:11:30.047290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.863 [2024-07-24 19:11:30.047302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:43.863 [2024-07-24 19:11:30.047317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.863 [2024-07-24 19:11:30.047327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:43.863 passed 00:10:44.122 Test: blockdev nvme passthru rw ...passed 00:10:44.122 Test: blockdev nvme passthru vendor specific ...[2024-07-24 19:11:30.129175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:44.122 [2024-07-24 19:11:30.129201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:44.122 [2024-07-24 19:11:30.129391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:44.122 [2024-07-24 19:11:30.129403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:44.122 [2024-07-24 19:11:30.129579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:44.122 [2024-07-24 19:11:30.129591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:44.122 [2024-07-24 19:11:30.129774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:44.122 [2024-07-24 19:11:30.129787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:44.122 passed 00:10:44.122 Test: blockdev nvme admin passthru ...passed 00:10:44.122 Test: blockdev copy ...passed 00:10:44.122 00:10:44.122 Run Summary: Type Total Ran Passed Failed Inactive 00:10:44.122 suites 1 1 n/a 0 0 00:10:44.122 tests 23 23 23 0 0 00:10:44.122 asserts 152 152 152 0 n/a 00:10:44.122 00:10:44.122 Elapsed time = 1.324 seconds 00:10:44.122 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:44.122 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.122 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:44.122 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.122 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:44.122 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:44.122 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:44.122 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:10:44.122 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:44.122 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:10:44.122 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:44.122 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:44.381 rmmod nvme_tcp 00:10:44.381 rmmod nvme_fabrics 00:10:44.381 rmmod nvme_keyring 00:10:44.381 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:44.381 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:10:44.381 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
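bdevio drove the target through the in-process initiator described by the JSON config above; a kernel-initiator counterpart, sketched with the addresses and host identity from this run (this log never actually connects one during bdevio), would be:

  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1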
00:10:44.381 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1430833 ']' 00:10:44.381 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1430833 00:10:44.381 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1430833 ']' 00:10:44.381 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1430833 00:10:44.381 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:44.381 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:44.381 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1430833 00:10:44.381 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:44.381 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:44.381 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1430833' 00:10:44.381 killing process with pid 1430833 00:10:44.381 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1430833 00:10:44.381 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1430833 00:10:44.640 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:44.640 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:44.640 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:44.640 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:44.640 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:44.640 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.640 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.640 19:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.547 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:46.547 00:10:46.547 real 0m11.777s 00:10:46.547 user 0m13.814s 00:10:46.547 sys 0m6.019s 00:10:46.547 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:46.547 19:11:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:46.547 ************************************ 00:10:46.547 END TEST nvmf_bdevio 00:10:46.547 ************************************ 00:10:46.547 19:11:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:46.547 00:10:46.547 real 4m52.700s 00:10:46.547 user 10m51.119s 00:10:46.547 sys 2m1.286s 00:10:46.547 19:11:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:46.547 19:11:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:46.547 ************************************ 00:10:46.547 END TEST nvmf_target_core 00:10:46.547 ************************************ 00:10:46.806 19:11:32 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:46.806 19:11:32 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:46.806 19:11:32 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:46.806 19:11:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:46.806 ************************************ 00:10:46.806 START TEST nvmf_target_extra 00:10:46.806 ************************************ 00:10:46.806 19:11:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:46.806 * Looking for test storage... 00:10:46.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:46.806 19:11:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:46.806 19:11:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:46.806 19:11:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:46.806 19:11:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.806 19:11:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.806 19:11:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.806 19:11:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.806 19:11:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.806 19:11:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.806 19:11:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.806 19:11:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.806 19:11:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.806 19:11:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:46.806 19:11:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:10:46.806 19:11:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.806 19:11:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.806 19:11:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:46.806 19:11:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.806 19:11:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:46.806 19:11:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.806 19:11:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.806 19:11:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.807 19:11:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.807 19:11:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.807 19:11:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.807 19:11:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:46.807 19:11:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.807 19:11:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:10:46.807 19:11:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:46.807 19:11:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:46.807 19:11:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.807 19:11:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.807 19:11:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.807 19:11:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:46.807 19:11:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:46.807 19:11:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:46.807 19:11:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:46.807 19:11:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:46.807 19:11:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:46.807 19:11:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
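The START TEST/END TEST banners and the real/user/sys timings around each suite come from the run_test helper in autotest_common.sh; in spirit it behaves like this hedged sketch (the real helper also manages xtrace state and exit-code bookkeeping):

  run_test() {
      local suite=$1; shift
      echo "START TEST $suite"
      time "$@"                    # e.g. run_test nvmf_example .../nvmf_example.sh --transport=tcp
      echo "END TEST $suite"
  }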
00:10:46.807 19:11:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:46.807 19:11:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:46.807 19:11:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:46.807 ************************************ 00:10:46.807 START TEST nvmf_example 00:10:46.807 ************************************ 00:10:46.807 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:47.066 * Looking for test storage... 00:10:47.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:47.066 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:47.066 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:47.066 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:47.066 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:47.066 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:47.066 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:47.066 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:47.066 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:47.066 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:47.066 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:47.066 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:47.066 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:47.066 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:47.066 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:10:47.066 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:47.066 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:47.066 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:47.066 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:47.066 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:47.066 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:47.066 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:47.066 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:47.067 19:11:33 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:10:47.067 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:47.067 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:47.067 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:47.067 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:47.067 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:47.067 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n
'' ']' 00:10:47.067 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:47.067 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:47.067 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:47.067 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:47.067 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:47.067 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:47.067 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:47.067 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:47.067 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:47.067 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:47.067 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:47.067 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:47.067 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:47.067 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:47.067 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:47.067 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:47.067 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:47.067 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:47.067 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.067 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.067 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.067 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:47.067 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:47.067 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:10:47.067 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:53.673 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:53.673 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:10:53.673 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:53.673 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:53.673 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:53.673 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:10:53.673 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:53.673 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:10:53.673 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:53.673 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:10:53.673 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:10:53.673 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:10:53.673 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:10:53.673 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:10:53.673 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:10:53.673 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:53.673 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:53.673 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:53.673 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:53.673 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:53.673 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:53.673 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:53.673 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:53.673 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:53.673 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:53.673 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:53.673 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:53.674 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:53.674 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:53.674 Found net devices under 0000:af:00.0: cvl_0_0 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.674 19:11:39 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:53.674 Found net devices under 0000:af:00.1: cvl_0_1 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:53.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:53.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:10:53.674 00:10:53.674 --- 10.0.0.2 ping statistics --- 00:10:53.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.674 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:53.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:53.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:10:53.674 00:10:53.674 --- 10.0.0.1 ping statistics --- 00:10:53.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.674 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1435031 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1435031 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 1435031 ']' 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:53.674 19:11:39 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:53.674 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:53.934 EAL: No free 2048 kB hugepages reported on node 1 00:10:54.504 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:54.504 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:10:54.504 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:54.504 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:54.504 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:54.763 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:54.763 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.763 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:54.763 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.763 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:54.763 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.763 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:54.763 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.763 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:54.763 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:54.763 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.763 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:54.763 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.763 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:54.763 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:54.763 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.763 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:54.763 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.763 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:54.763 19:11:40 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.763 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:54.763 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.763 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:54.763 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:54.763 EAL: No free 2048 kB hugepages reported on node 1 00:11:06.978 Initializing NVMe Controllers 00:11:06.978 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:06.978 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:06.978 Initialization complete. Launching workers. 00:11:06.978 ======================================================== 00:11:06.978 Latency(us) 00:11:06.978 Device Information : IOPS MiB/s Average min max 00:11:06.978 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17856.80 69.75 3585.46 678.20 15506.54 00:11:06.978 ======================================================== 00:11:06.978 Total : 17856.80 69.75 3585.46 678.20 15506.54 00:11:06.979 00:11:06.979 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:06.979 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:06.979 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:06.979 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:11:06.979 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:06.979 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:11:06.979 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:06.979 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:06.979 rmmod nvme_tcp 00:11:06.979 rmmod nvme_fabrics 00:11:06.979 rmmod nvme_keyring 00:11:06.979 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:06.979 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:11:06.979 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:11:06.979 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1435031 ']' 00:11:06.979 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1435031 00:11:06.979 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 1435031 ']' 00:11:06.979 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 1435031 00:11:06.979 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:11:06.979 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:06.979 19:11:51 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1435031 00:11:06.979 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:11:06.979 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:11:06.979 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1435031' 00:11:06.979 killing process with pid 1435031 00:11:06.979 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 1435031 00:11:06.979 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 1435031 00:11:06.979 nvmf threads initialize successfully 00:11:06.979 bdev subsystem init successfully 00:11:06.979 created a nvmf target service 00:11:06.979 create targets's poll groups done 00:11:06.979 all subsystems of target started 00:11:06.979 nvmf target is running 00:11:06.979 all subsystems of target stopped 00:11:06.979 destroy targets's poll groups done 00:11:06.979 destroyed the nvmf target service 00:11:06.979 bdev subsystem finish successfully 00:11:06.979 nvmf threads destroy successfully 00:11:06.979 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:06.979 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:06.979 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:06.979 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:06.979 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:06.979 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.979 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.979 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.547 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:07.547 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:07.547 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:07.547 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:07.547 00:11:07.547 real 0m20.584s 00:11:07.547 user 0m45.738s 00:11:07.547 sys 0m7.407s 00:11:07.547 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:07.547 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:07.547 ************************************ 00:11:07.547 END TEST nvmf_example 00:11:07.547 ************************************ 00:11:07.547 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:07.547 19:11:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:07.547 19:11:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:07.547 19:11:53 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:07.547 ************************************ 00:11:07.547 START TEST nvmf_filesystem 00:11:07.547 ************************************ 00:11:07.547 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:07.809 * Looking for test storage... 00:11:07.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:07.809 19:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:11:07.809 19:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:11:07.809 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:11:07.810 19:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:11:07.810 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:11:07.810 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:11:07.810 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:11:07.810 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:11:07.810 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:11:07.810 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:11:07.810 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:07.810 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:11:07.810 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:11:07.810 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:07.810 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:07.810 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:07.810 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:07.810 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:07.810 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:07.810 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:07.810 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:07.810 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:07.810 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:07.810 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:07.810 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:07.810 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:07.810 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:07.810 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:07.810 19:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:07.810 #define SPDK_CONFIG_H 00:11:07.810 #define SPDK_CONFIG_APPS 1 00:11:07.810 #define SPDK_CONFIG_ARCH native 00:11:07.810 #undef SPDK_CONFIG_ASAN 00:11:07.810 #undef SPDK_CONFIG_AVAHI 00:11:07.810 #undef SPDK_CONFIG_CET 00:11:07.810 #define SPDK_CONFIG_COVERAGE 1 00:11:07.810 #define SPDK_CONFIG_CROSS_PREFIX 00:11:07.810 #undef SPDK_CONFIG_CRYPTO 00:11:07.810 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:07.810 #undef SPDK_CONFIG_CUSTOMOCF 00:11:07.810 #undef SPDK_CONFIG_DAOS 00:11:07.810 #define SPDK_CONFIG_DAOS_DIR 00:11:07.810 #define SPDK_CONFIG_DEBUG 1 00:11:07.810 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:07.810 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:07.810 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:07.810 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:07.810 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:07.810 #undef SPDK_CONFIG_DPDK_UADK 00:11:07.810 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:07.810 #define SPDK_CONFIG_EXAMPLES 1 00:11:07.810 #undef SPDK_CONFIG_FC 00:11:07.810 #define SPDK_CONFIG_FC_PATH 00:11:07.810 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:07.810 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:07.810 #undef SPDK_CONFIG_FUSE 00:11:07.810 #undef SPDK_CONFIG_FUZZER 00:11:07.810 #define SPDK_CONFIG_FUZZER_LIB 00:11:07.810 #undef SPDK_CONFIG_GOLANG 00:11:07.810 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:07.810 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:07.810 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:07.810 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:07.810 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:07.810 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:07.810 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:07.810 #define SPDK_CONFIG_IDXD 1 00:11:07.810 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:07.810 #undef SPDK_CONFIG_IPSEC_MB 00:11:07.810 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:07.810 #define SPDK_CONFIG_ISAL 1 00:11:07.810 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:07.810 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:07.810 #define SPDK_CONFIG_LIBDIR 00:11:07.810 #undef SPDK_CONFIG_LTO 00:11:07.810 #define SPDK_CONFIG_MAX_LCORES 128 00:11:07.810 #define SPDK_CONFIG_NVME_CUSE 1 00:11:07.810 #undef SPDK_CONFIG_OCF 00:11:07.810 #define SPDK_CONFIG_OCF_PATH 00:11:07.810 #define SPDK_CONFIG_OPENSSL_PATH 00:11:07.810 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:07.810 #define SPDK_CONFIG_PGO_DIR 00:11:07.810 #undef SPDK_CONFIG_PGO_USE 00:11:07.810 #define SPDK_CONFIG_PREFIX /usr/local 00:11:07.810 #undef SPDK_CONFIG_RAID5F 00:11:07.810 #undef SPDK_CONFIG_RBD 00:11:07.810 #define SPDK_CONFIG_RDMA 1 00:11:07.810 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:07.810 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:07.810 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:07.810 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:07.810 #define SPDK_CONFIG_SHARED 1 00:11:07.810 #undef SPDK_CONFIG_SMA 00:11:07.810 #define SPDK_CONFIG_TESTS 1 00:11:07.810 #undef SPDK_CONFIG_TSAN 00:11:07.810 #define SPDK_CONFIG_UBLK 1 00:11:07.810 #define SPDK_CONFIG_UBSAN 1 00:11:07.810 #undef SPDK_CONFIG_UNIT_TESTS 00:11:07.810 #undef SPDK_CONFIG_URING 00:11:07.810 #define SPDK_CONFIG_URING_PATH 00:11:07.810 #undef SPDK_CONFIG_URING_ZNS 00:11:07.810 #undef SPDK_CONFIG_USDT 00:11:07.810 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:07.810 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:07.810 #define SPDK_CONFIG_VFIO_USER 1 00:11:07.810 #define 
SPDK_CONFIG_VFIO_USER_DIR 00:11:07.810 #define SPDK_CONFIG_VHOST 1 00:11:07.810 #define SPDK_CONFIG_VIRTIO 1 00:11:07.810 #undef SPDK_CONFIG_VTUNE 00:11:07.810 #define SPDK_CONFIG_VTUNE_DIR 00:11:07.810 #define SPDK_CONFIG_WERROR 1 00:11:07.810 #define SPDK_CONFIG_WPDK_DIR 00:11:07.810 #undef SPDK_CONFIG_XNVME 00:11:07.810 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:07.810 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:07.810 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:07.810 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:07.810 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:07.810 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:07.810 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.810 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.810 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.810 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:07.810 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.810 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:07.811 19:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 
00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:07.811 19:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:11:07.811 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:07.812 19:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export 
SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:07.812 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export valgrind= 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j112 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 1437371 ]] 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 1437371 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.nBwOm8 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.nBwOm8/tests/target /tmp/spdk.nBwOm8 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@329 -- # df -T 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=955215872 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4329213952 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=55328952320 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=61742276608 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=6413324288 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30861217792 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871138304 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=9920512 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 
00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=12325425152 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=12348456960 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=23031808 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30870282240 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871138304 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=856064 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=6174220288 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=6174224384 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:11:07.813 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:11:07.814 * Looking for test storage... 
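The records above finish loading the df -T scan into the mounts/fss/sizes/avails/uses arrays; the records below then walk storage_candidates and keep the first directory whose filesystem still offers requested_size=2214592512 bytes (2 GiB plus 64 MiB of slack), with a further guard that an overlay root is not pushed past 95% full. A condensed sketch of that selection, not the verbatim common/autotest_common.sh helper, with the -B1 flag assumed so df reports bytes the way the logged values compare:

# Sketch only: index `df -T` output by mount point, as the arrays above are built.
declare -A mounts fss sizes avails uses
while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source
    fss["$mount"]=$fs
    sizes["$mount"]=$size
    avails["$mount"]=$avail
    uses["$mount"]=$use
done < <(df -T -B1 | grep -v Filesystem)            # -B1 is an assumption: byte units

# Sketch only: first candidate whose filesystem can hold the requested space wins.
requested_size=$((2147483648 + 64 * 1024 * 1024))   # 2214592512, as logged
for target_dir in "${storage_candidates[@]}"; do    # candidates were set earlier in the trace
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    target_space=${avails[$mount]:-0}
    (( target_space >= requested_size )) && break
done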
00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=55328952320 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # new_size=8627916800 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:07.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:07.814 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:07.814 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:07.814 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:07.814 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:07.814 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:07.814 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:07.814 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:11:07.814 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.814 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.814 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.814 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:07.815 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.815 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:11:07.815 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:07.815 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:07.815 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 
']' 00:11:07.815 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:07.815 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:07.815 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:07.815 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:07.815 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:07.815 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:07.815 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:07.815 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:07.815 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:07.815 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:07.815 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:07.815 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:07.815 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:07.815 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.815 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:07.815 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.815 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:07.815 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:07.815 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:07.815 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:14.387 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:14.387 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:14.387 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:14.387 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:14.387 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:14.387 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:14.387 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:14.387 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:14.387 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:14.387 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:11:14.387 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:14.647 
19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:14.647 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:14.647 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:14.647 Found net devices under 0000:af:00.0: cvl_0_0 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:14.647 Found net devices under 0000:af:00.1: cvl_0_1 00:11:14.647 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.648 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:11:14.648 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:14.648 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:14.648 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:14.648 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:14.648 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:14.648 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:14.648 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:14.648 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:14.648 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:14.648 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:14.648 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:14.648 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:14.648 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:14.648 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:14.648 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:14.648 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:14.648 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:14.648 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:14.648 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:14.648 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:14.648 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:14.906 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:14.906 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:14.906 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:14.906 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:14.906 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:11:14.906 00:11:14.906 --- 10.0.0.2 ping statistics --- 00:11:14.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.906 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:11:14.906 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:14.906 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:14.906 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:11:14.906 00:11:14.906 --- 10.0.0.1 ping statistics --- 00:11:14.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.906 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:11:14.906 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:14.906 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:11:14.906 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:14.906 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:14.906 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:14.906 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:14.906 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:14.906 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:14.906 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:14.906 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:14.906 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:14.906 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:14.906 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:14.906 ************************************ 00:11:14.906 START TEST nvmf_filesystem_no_in_capsule 00:11:14.906 ************************************ 00:11:14.906 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:14.906 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:14.906 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:14.906 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:14.906 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:14.906 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.906 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1440768 00:11:14.906 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1440768 00:11:14.906 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:14.906 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1440768 ']' 00:11:14.906 
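nvmfappstart launches the target inside that namespace and records the pid for later cleanup; waitforlisten then polls until the RPC socket answers. A sketch of the equivalent launch, with the flag meanings taken from the startup notices that follow ($rootdir stands in for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk):

    ip netns exec cvl_0_0_ns_spdk \
        "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # -m 0xF   : core mask, reactors on cores 0-3
    # -e 0xFFFF: enable all tracepoint groups
    # -i 0     : shared-memory instance id (hence file-prefix=spdk0, nvmf_trace.0)
    waitforlisten "$nvmfpid"   # loops until /var/tmp/spdk.sock accepts RPCs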
19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.906 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:14.906 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.906 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:14.906 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.907 [2024-07-24 19:12:01.082220] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:11:14.907 [2024-07-24 19:12:01.082272] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.907 EAL: No free 2048 kB hugepages reported on node 1 00:11:15.166 [2024-07-24 19:12:01.155683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:15.166 [2024-07-24 19:12:01.229990] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:15.166 [2024-07-24 19:12:01.230033] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:15.166 [2024-07-24 19:12:01.230047] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:15.166 [2024-07-24 19:12:01.230058] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:15.166 [2024-07-24 19:12:01.230068] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
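The app_setup_trace notices above also spell out how to inspect the target afterwards; both forms come straight from the log output:

    spdk_trace -s nvmf -i 0        # snapshot tracepoint events from the live target
    cp /dev/shm/nvmf_trace.0 .     # or keep the raw trace file for offline analysis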
00:11:15.166 [2024-07-24 19:12:01.230132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.166 [2024-07-24 19:12:01.230226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.166 [2024-07-24 19:12:01.230328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:15.166 [2024-07-24 19:12:01.230332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.734 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:15.734 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:15.734 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:15.734 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:15.734 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.734 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:15.734 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:15.734 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:15.734 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.734 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.734 [2024-07-24 19:12:01.941939] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:15.734 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.734 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:15.734 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.734 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.993 Malloc1 00:11:15.993 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.993 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:15.993 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.993 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.993 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.993 19:12:02 
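With the app up, the test provisions everything over JSON-RPC; rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py. The transport, bdev, and subsystem calls appear above, and the namespace and listener calls follow on the next lines. Condensed:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    #   -c 0 disables in-capsule data, which is what "no_in_capsule" means
    rpc.py bdev_malloc_create 512 512 -b Malloc1
    #   512 MiB RAM-backed bdev with 512-byte blocks -> 1048576 blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    #   -a: allow any host NQN; -s: the serial number the host will grep for
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420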
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:15.993 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.993 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.993 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.993 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:15.993 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.993 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.993 [2024-07-24 19:12:02.090992] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:15.993 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.993 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:15.993 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:15.993 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:15.993 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:15.993 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:15.993 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:15.993 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.994 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.994 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.994 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:15.994 { 00:11:15.994 "name": "Malloc1", 00:11:15.994 "aliases": [ 00:11:15.994 "66dd49ee-6107-493a-aaf5-f504622aaf0e" 00:11:15.994 ], 00:11:15.994 "product_name": "Malloc disk", 00:11:15.994 "block_size": 512, 00:11:15.994 "num_blocks": 1048576, 00:11:15.994 "uuid": "66dd49ee-6107-493a-aaf5-f504622aaf0e", 00:11:15.994 "assigned_rate_limits": { 00:11:15.994 "rw_ios_per_sec": 0, 00:11:15.994 "rw_mbytes_per_sec": 0, 00:11:15.994 "r_mbytes_per_sec": 0, 00:11:15.994 "w_mbytes_per_sec": 0 00:11:15.994 }, 00:11:15.994 "claimed": true, 00:11:15.994 "claim_type": "exclusive_write", 00:11:15.994 "zoned": false, 00:11:15.994 "supported_io_types": { 00:11:15.994 "read": 
true, 00:11:15.994 "write": true, 00:11:15.994 "unmap": true, 00:11:15.994 "flush": true, 00:11:15.994 "reset": true, 00:11:15.994 "nvme_admin": false, 00:11:15.994 "nvme_io": false, 00:11:15.994 "nvme_io_md": false, 00:11:15.994 "write_zeroes": true, 00:11:15.994 "zcopy": true, 00:11:15.994 "get_zone_info": false, 00:11:15.994 "zone_management": false, 00:11:15.994 "zone_append": false, 00:11:15.994 "compare": false, 00:11:15.994 "compare_and_write": false, 00:11:15.994 "abort": true, 00:11:15.994 "seek_hole": false, 00:11:15.994 "seek_data": false, 00:11:15.994 "copy": true, 00:11:15.994 "nvme_iov_md": false 00:11:15.994 }, 00:11:15.994 "memory_domains": [ 00:11:15.994 { 00:11:15.994 "dma_device_id": "system", 00:11:15.994 "dma_device_type": 1 00:11:15.994 }, 00:11:15.994 { 00:11:15.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.994 "dma_device_type": 2 00:11:15.994 } 00:11:15.994 ], 00:11:15.994 "driver_specific": {} 00:11:15.994 } 00:11:15.994 ]' 00:11:15.994 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:15.994 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:15.994 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:15.994 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:15.994 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:15.994 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:15.994 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:15.994 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:17.371 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:17.371 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:17.371 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:17.371 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:17.371 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:19.905 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:19.905 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:19.905 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:19.905 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:19.905 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:19.905 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:19.905 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:19.905 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:19.905 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:19.905 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:19.905 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:19.905 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:19.905 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:19.905 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:19.905 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:19.905 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:19.905 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:19.905 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:20.522 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:21.459 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:21.459 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:21.459 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:21.459 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:21.459 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.459 ************************************ 00:11:21.459 START TEST filesystem_ext4 00:11:21.459 ************************************ 00:11:21.459 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
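At this point the host side is wired up: nvme connect attaches the initiator to the listener, waitforserial polls lsblk until the namespace surfaces with the advertised serial, the harness checks the host-visible size against the bdev size, and parted lays down a single test partition. Roughly, with the retry loop paraphrased from the waitforserial helper:

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e \
        --hostid=006f0d1b-21c0-e711-906e-00163566263e
    until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 2; done
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
    mkdir -p /mnt/device
    parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe    # make the kernel re-read the new partition table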
00:11:21.459 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:21.459 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:21.459 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:21.459 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:21.459 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:21.459 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:21.459 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:21.459 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:21.459 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:21.459 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:21.459 mke2fs 1.46.5 (30-Dec-2021) 00:11:21.718 Discarding device blocks: 0/522240 done 00:11:21.718 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:21.719 Filesystem UUID: 78fd0822-879a-4b69-b0a8-71d819dfb4ad 00:11:21.719 Superblock backups stored on blocks: 00:11:21.719 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:21.719 00:11:21.719 Allocating group tables: 0/64 done 00:11:21.719 Writing inode tables: 0/64 done 00:11:21.719 Creating journal (8192 blocks): done 00:11:22.804 Writing superblocks and filesystem accounting information: 0/64 1/64 done 00:11:22.804 00:11:22.804 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:22.804 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:23.372 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:23.632 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:23.632 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:23.632 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:23.632 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:23.632 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:23.632 
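Every per-filesystem test runs the same smoke cycle shown above for ext4: format the partition, mount it, create and delete a file with syncs in between, and unmount; afterwards kill -0 against the target pid (next line) proves the target survived the I/O. Condensed:

    mkfs.ext4 -F /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"    # signal 0: liveness check only, nothing is delivered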
19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1440768 00:11:23.632 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:23.632 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:23.632 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:23.632 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:23.632 00:11:23.632 real 0m2.074s 00:11:23.632 user 0m0.033s 00:11:23.632 sys 0m0.075s 00:11:23.632 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:23.632 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:23.632 ************************************ 00:11:23.632 END TEST filesystem_ext4 00:11:23.632 ************************************ 00:11:23.632 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:23.632 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:23.632 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:23.632 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:23.632 ************************************ 00:11:23.632 START TEST filesystem_btrfs 00:11:23.632 ************************************ 00:11:23.632 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:23.632 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:23.632 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:23.632 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:23.632 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:23.632 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:23.632 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:23.632 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:23.632 19:12:09 
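The btrfs round that starts here differs only in how make_filesystem forces the format: mkfs.ext4 takes -F while the other mkfs tools take -f, which is exactly what the '[' btrfs = ext4 ']' test on the next line decides. A sketch of that helper's core, reconstructed from the traced conditionals:

    # make_filesystem <fstype> <dev>: pick the right force flag, then format.
    fstype=$1 dev_name=$2
    if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
    "mkfs.$fstype" "$force" "$dev_name"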
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:23.632 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:23.632 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:24.200 btrfs-progs v6.6.2 00:11:24.200 See https://btrfs.readthedocs.io for more information. 00:11:24.200 00:11:24.200 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:24.200 NOTE: several default settings have changed in version 5.15, please make sure 00:11:24.200 this does not affect your deployments: 00:11:24.200 - DUP for metadata (-m dup) 00:11:24.200 - enabled no-holes (-O no-holes) 00:11:24.200 - enabled free-space-tree (-R free-space-tree) 00:11:24.200 00:11:24.200 Label: (null) 00:11:24.200 UUID: 37fd7e62-909a-4d95-814f-73f354526e38 00:11:24.200 Node size: 16384 00:11:24.200 Sector size: 4096 00:11:24.200 Filesystem size: 510.00MiB 00:11:24.200 Block group profiles: 00:11:24.200 Data: single 8.00MiB 00:11:24.200 Metadata: DUP 32.00MiB 00:11:24.200 System: DUP 8.00MiB 00:11:24.200 SSD detected: yes 00:11:24.200 Zoned device: no 00:11:24.200 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:24.200 Runtime features: free-space-tree 00:11:24.200 Checksum: crc32c 00:11:24.200 Number of devices: 1 00:11:24.200 Devices: 00:11:24.200 ID SIZE PATH 00:11:24.200 1 510.00MiB /dev/nvme0n1p1 00:11:24.200 00:11:24.200 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:24.200 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:24.770 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:24.770 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:25.029 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:25.029 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:25.029 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:25.029 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:25.029 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1440768 00:11:25.029 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:25.029 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:25.029 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # 
lsblk -l -o NAME 00:11:25.029 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:25.029 00:11:25.029 real 0m1.281s 00:11:25.029 user 0m0.030s 00:11:25.029 sys 0m0.139s 00:11:25.029 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:25.029 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:25.029 ************************************ 00:11:25.029 END TEST filesystem_btrfs 00:11:25.029 ************************************ 00:11:25.029 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:25.029 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:25.029 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:25.029 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.029 ************************************ 00:11:25.029 START TEST filesystem_xfs 00:11:25.029 ************************************ 00:11:25.029 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:25.029 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:25.029 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:25.029 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:25.029 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:25.029 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:25.029 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:25.029 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:25.029 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:25.029 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:25.029 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:25.029 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:25.030 = sectsz=512 attr=2, projid32bit=1 00:11:25.030 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:25.030 = reflink=1 bigtime=1 
inobtcount=1 nrext64=0 00:11:25.030 data = bsize=4096 blocks=130560, imaxpct=25 00:11:25.030 = sunit=0 swidth=0 blks 00:11:25.030 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:25.030 log =internal log bsize=4096 blocks=16384, version=2 00:11:25.030 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:25.030 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:26.407 Discarding blocks...Done. 00:11:26.407 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:26.407 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:27.785 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:28.045 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:28.045 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:28.045 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:28.045 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:28.045 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:28.045 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1440768 00:11:28.045 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:28.045 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:28.045 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:28.045 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:28.045 00:11:28.045 real 0m2.965s 00:11:28.045 user 0m0.024s 00:11:28.045 sys 0m0.087s 00:11:28.045 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:28.045 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:28.045 ************************************ 00:11:28.045 END TEST filesystem_xfs 00:11:28.045 ************************************ 00:11:28.045 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:28.045 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:28.045 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:28.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:11:28.305 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:28.305 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:28.305 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:28.305 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:28.305 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:28.305 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:28.305 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:28.305 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:28.305 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.305 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.305 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.305 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:28.305 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1440768 00:11:28.305 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1440768 ']' 00:11:28.305 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1440768 00:11:28.305 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:28.305 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:28.305 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1440768 00:11:28.305 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:28.305 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:28.305 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1440768' 00:11:28.305 killing process with pid 1440768 00:11:28.305 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 1440768 00:11:28.305 19:12:14 
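Teardown mirrors setup: drop the test partition under a flock so nothing races the device node, disconnect the initiator, delete the subsystem over RPC, and kill the target (the wait on the next line reaps it). Condensed from the trace:

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"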
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 1440768 00:11:28.564 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:28.564 00:11:28.564 real 0m13.743s 00:11:28.564 user 0m53.706s 00:11:28.564 sys 0m1.766s 00:11:28.564 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:28.564 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.564 ************************************ 00:11:28.564 END TEST nvmf_filesystem_no_in_capsule 00:11:28.564 ************************************ 00:11:28.823 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:28.823 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:28.823 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:28.823 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:28.823 ************************************ 00:11:28.823 START TEST nvmf_filesystem_in_capsule 00:11:28.823 ************************************ 00:11:28.823 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:11:28.823 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:28.823 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:28.823 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:28.823 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:28.823 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.823 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1443831 00:11:28.823 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1443831 00:11:28.823 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:28.823 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1443831 ']' 00:11:28.823 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.823 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:28.823 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:28.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.823 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:28.823 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.823 [2024-07-24 19:12:14.914454] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:11:28.823 [2024-07-24 19:12:14.914501] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:28.823 EAL: No free 2048 kB hugepages reported on node 1 00:11:28.823 [2024-07-24 19:12:14.988508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:28.823 [2024-07-24 19:12:15.053841] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:28.823 [2024-07-24 19:12:15.053887] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:28.823 [2024-07-24 19:12:15.053904] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:28.823 [2024-07-24 19:12:15.053914] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:28.823 [2024-07-24 19:12:15.053923] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:28.823 [2024-07-24 19:12:15.054024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.823 [2024-07-24 19:12:15.054119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:28.823 [2024-07-24 19:12:15.054208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:28.823 [2024-07-24 19:12:15.054211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.762 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:29.762 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:29.762 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:29.762 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:29.762 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.762 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:29.762 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:29.762 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:29.762 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.762 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 
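The second suite repeats the whole flow with one change: the transport is created with -c 4096 instead of -c 0, so writes of up to 4096 bytes ride inside the NVMe/TCP command capsule instead of being pulled later in a separate data transfer. Only the transport call differs (the -u gloss below is my reading of rpc.py's io-unit-size option, not stated in the log):

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
    #   -c: in-capsule data size in bytes (0 disabled it in the first suite)
    #   -u: I/O unit size; -t and -o as before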
00:11:29.762 [2024-07-24 19:12:15.776961] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:29.762 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.762 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:29.762 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.762 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.762 Malloc1 00:11:29.762 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.762 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:29.762 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.762 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.762 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.762 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:29.762 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.762 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.762 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.762 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.762 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.762 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.762 [2024-07-24 19:12:15.922309] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.762 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.762 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:29.762 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:29.762 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:29.762 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:29.762 19:12:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:29.762 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:29.762 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.762 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.762 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.762 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:29.762 { 00:11:29.762 "name": "Malloc1", 00:11:29.762 "aliases": [ 00:11:29.762 "26c5de8c-777d-46ff-8467-c01e89748328" 00:11:29.762 ], 00:11:29.762 "product_name": "Malloc disk", 00:11:29.762 "block_size": 512, 00:11:29.762 "num_blocks": 1048576, 00:11:29.762 "uuid": "26c5de8c-777d-46ff-8467-c01e89748328", 00:11:29.762 "assigned_rate_limits": { 00:11:29.762 "rw_ios_per_sec": 0, 00:11:29.762 "rw_mbytes_per_sec": 0, 00:11:29.762 "r_mbytes_per_sec": 0, 00:11:29.762 "w_mbytes_per_sec": 0 00:11:29.762 }, 00:11:29.762 "claimed": true, 00:11:29.762 "claim_type": "exclusive_write", 00:11:29.762 "zoned": false, 00:11:29.762 "supported_io_types": { 00:11:29.762 "read": true, 00:11:29.762 "write": true, 00:11:29.762 "unmap": true, 00:11:29.762 "flush": true, 00:11:29.762 "reset": true, 00:11:29.762 "nvme_admin": false, 00:11:29.762 "nvme_io": false, 00:11:29.762 "nvme_io_md": false, 00:11:29.762 "write_zeroes": true, 00:11:29.762 "zcopy": true, 00:11:29.762 "get_zone_info": false, 00:11:29.762 "zone_management": false, 00:11:29.762 "zone_append": false, 00:11:29.762 "compare": false, 00:11:29.762 "compare_and_write": false, 00:11:29.762 "abort": true, 00:11:29.762 "seek_hole": false, 00:11:29.762 "seek_data": false, 00:11:29.762 "copy": true, 00:11:29.762 "nvme_iov_md": false 00:11:29.762 }, 00:11:29.762 "memory_domains": [ 00:11:29.762 { 00:11:29.762 "dma_device_id": "system", 00:11:29.762 "dma_device_type": 1 00:11:29.762 }, 00:11:29.762 { 00:11:29.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.762 "dma_device_type": 2 00:11:29.762 } 00:11:29.762 ], 00:11:29.763 "driver_specific": {} 00:11:29.763 } 00:11:29.763 ]' 00:11:29.763 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:30.023 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:30.023 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:30.023 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:30.023 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:30.023 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:30.023 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:30.023 19:12:16 
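Before connecting, get_bdev_size recomputes the expected capacity from the bdev's own metadata; the harness later compares it against what the host block layer reports, so a size mismatch anywhere in the stack fails fast. The arithmetic behind the jq calls above:

    bs=$(rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')    # 512
    nb=$(rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')    # 1048576
    malloc_size=$(( bs * nb ))                                       # 536870912 bytes = 512 MiB
    # later: (( nvme_size == malloc_size )), nvme_size read via sec_size_to_bytes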
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:31.402 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:31.402 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:31.402 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:31.402 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:31.402 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:33.306 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:33.306 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:33.306 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:33.306 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:33.306 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:33.306 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:33.306 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:33.306 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:33.306 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:33.306 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:33.306 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:33.306 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:33.306 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:33.307 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:33.307 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:33.307 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:33.307 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
00:11:33.307 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
00:11:33.566 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe
00:11:33.825 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1
00:11:34.763 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']'
00:11:34.763 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1
00:11:34.764 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:11:34.764 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable
00:11:34.764 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:35.053 ************************************
00:11:35.053 START TEST filesystem_in_capsule_ext4
00:11:35.054 ************************************
00:11:35.054 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1
00:11:35.054 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4
00:11:35.054 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:11:35.054 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1
00:11:35.054 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4
00:11:35.054 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1
00:11:35.054 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0
00:11:35.054 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force
00:11:35.054 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']'
00:11:35.054 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F
00:11:35.054 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:11:35.054 mke2fs 1.46.5 (30-Dec-2021)
00:11:35.054 Discarding device blocks: 0/522240 done
00:11:35.054 Creating filesystem with 522240 1k blocks and 130560 inodes
00:11:35.054 Filesystem UUID: c3ed2690-f4f1-4359-b4e2-fd77caf88263
00:11:35.054 Superblock backups stored on blocks:
00:11:35.054 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:11:35.054
00:11:35.054 Allocating group tables: 0/64 done
00:11:35.054 Writing inode tables: 0/64 done
00:11:35.315 Creating journal (8192 blocks): done
00:11:36.142 Writing superblocks and filesystem accounting information: 0/64 done
00:11:36.142
00:11:36.142 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0
00:11:36.142 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:11:36.401 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:11:36.401 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync
00:11:36.401 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:11:36.401 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync
00:11:36.401 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0
00:11:36.401 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:11:36.401 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1443831
00:11:36.401 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:11:36.401 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:11:36.401 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:11:36.401 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:11:36.401
00:11:36.401 real 0m1.581s
00:11:36.401 user 0m0.023s
00:11:36.401 sys 0m0.083s
00:11:36.401 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:36.401 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x
00:11:36.401 ************************************
00:11:36.401 END TEST filesystem_in_capsule_ext4
00:11:36.401 ************************************
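The ext4 pass just completed is the same smoke test every filesystem in this suite gets: make the filesystem on the partition, mount it over the NVMe/TCP namespace, prove a write/delete cycle survives a sync, unmount, and confirm the SPDK target process survived the I/O. Distilled from the trace above:

    mkfs.ext4 -F /dev/nvme0n1p1          # -F: the partition is brand new, skip the prompt
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync        # a write must survive a flush over the fabric
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 "$nvmfpid"                   # pid 1443831 here: target must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still visible
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible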
00:11:36.661 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1
00:11:36.661 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:11:36.661 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable
00:11:36.661 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:36.661 ************************************
00:11:36.661 START TEST filesystem_in_capsule_btrfs
00:11:36.661 ************************************
00:11:36.661 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1
00:11:36.661 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs
00:11:36.661 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:11:36.661 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1
00:11:36.661 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs
00:11:36.661 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1
00:11:36.661 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0
00:11:36.661 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force
00:11:36.661 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']'
00:11:36.661 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f
00:11:36.661 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:11:36.920 btrfs-progs v6.6.2
00:11:36.920 See https://btrfs.readthedocs.io for more information.
00:11:36.920
00:11:36.920 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:11:36.920 NOTE: several default settings have changed in version 5.15, please make sure
00:11:36.920 this does not affect your deployments:
00:11:36.920 - DUP for metadata (-m dup)
00:11:36.920 - enabled no-holes (-O no-holes)
00:11:36.920 - enabled free-space-tree (-R free-space-tree)
00:11:36.920
00:11:36.920 Label: (null)
00:11:36.920 UUID: 39bdce90-de1b-4305-b71d-ae354081c600
00:11:36.920 Node size: 16384
00:11:36.920 Sector size: 4096
00:11:36.920 Filesystem size: 510.00MiB
00:11:36.920 Block group profiles:
00:11:36.920 Data: single 8.00MiB
00:11:36.920 Metadata: DUP 32.00MiB
00:11:36.920 System: DUP 8.00MiB
00:11:36.920 SSD detected: yes
00:11:36.920 Zoned device: no
00:11:36.920 Incompat features: extref, skinny-metadata, no-holes, free-space-tree
00:11:36.920 Runtime features: free-space-tree
00:11:36.920 Checksum: crc32c
00:11:36.920 Number of devices: 1
00:11:36.920 Devices:
00:11:36.920 ID SIZE PATH
00:11:36.920 1 510.00MiB /dev/nvme0n1p1
00:11:36.920
00:11:36.920 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0
00:11:36.920 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:11:37.857 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:11:37.857 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync
00:11:37.857 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:11:37.857 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync
00:11:37.857 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0
00:11:37.857 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:11:37.857 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1443831
00:11:37.857 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:11:37.857 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:11:37.857 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:11:37.857 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:11:37.857
00:11:37.857 real 0m1.191s
00:11:37.857 user 0m0.038s
00:11:37.857 sys 0m0.133s
00:11:37.857 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:37.857 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x
00:11:37.857 ************************************
00:11:37.857 END TEST filesystem_in_capsule_btrfs
00:11:37.857 ************************************
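Note how make_filesystem picked -f for btrfs where the ext4 pass above got -F: mkfs.ext4 is the odd one out for its force-flag spelling. The selection traced at common/autotest_common.sh@931-934 is essentially this (a sketch using the same variable names as the trace):

    if [ "$fstype" = ext4 ]; then
        force=-F    # mke2fs forces with uppercase -F
    else
        force=-f    # mkfs.btrfs and mkfs.xfs use lowercase -f
    fi
    "mkfs.$fstype" $force "$dev_name"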
00:11:37.857 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1
00:11:37.857 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:11:37.857 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable
00:11:37.857 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:37.857 ************************************
00:11:37.857 START TEST filesystem_in_capsule_xfs
00:11:37.857 ************************************
00:11:37.857 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1
00:11:37.857 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:11:37.857 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:11:37.857 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:11:37.857 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs
00:11:37.857 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1
00:11:37.857 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0
00:11:37.857 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force
00:11:37.857 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']'
00:11:37.857 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f
00:11:37.857 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1
00:11:37.857 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:11:37.857 = sectsz=512 attr=2, projid32bit=1
00:11:37.857 = crc=1 finobt=1, sparse=1, rmapbt=0
00:11:37.858 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:11:37.858 data = bsize=4096 blocks=130560, imaxpct=25
00:11:37.858 = sunit=0 swidth=0 blks
00:11:37.858 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:11:37.858 log =internal log bsize=4096 blocks=16384, version=2
00:11:37.858 = sectsz=512 sunit=0 blks, lazy-count=1
00:11:37.858 realtime =none extsz=4096 blocks=0, rtextents=0
00:11:38.795 Discarding blocks...Done.
00:11:38.795 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0
00:11:38.795 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:11:41.331 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:11:41.331 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync
00:11:41.331 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:11:41.331 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync
00:11:41.331 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0
00:11:41.331 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:11:41.331 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1443831
00:11:41.331 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:11:41.331 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:11:41.331 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:11:41.331 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:11:41.331
00:11:41.331 real 0m3.592s
00:11:41.331 user 0m0.038s
00:11:41.331 sys 0m0.074s
00:11:41.331 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:41.331 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x
00:11:41.331 ************************************
00:11:41.331 END TEST filesystem_in_capsule_xfs
00:11:41.331 ************************************
00:11:41.590 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
00:11:41.590 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync
00:11:41.590 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:41.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:41.590 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
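Initiator-side teardown, per the trace above: drop the test partition (under flock so nothing races the partition table), flush, disconnect the NVMe/TCP controller, then wait for the serial to vanish from lsblk. A loose sketch of waitforserial_disconnect (the polling loop is an assumption about the helper, inferred from the lsblk/grep checks it traces):

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    while lsblk -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
        sleep 1
    done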
00:11:41.590 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0
00:11:41.590 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:11:41.590 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:41.590 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:11:41.590 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:41.590 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0
00:11:41.590 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:41.590 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:41.590 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:41.590 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:41.590 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:11:41.590 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1443831
00:11:41.590 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1443831 ']'
00:11:41.590 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1443831
00:11:41.590 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname
00:11:41.590 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:11:41.591 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1443831
00:11:41.850 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:11:41.850 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:11:41.850 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1443831'
00:11:41.850 killing process with pid 1443831
00:11:41.850 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 1443831
00:11:41.850 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 1443831
00:11:42.109 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid=
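Target-side teardown mirrors that: the subsystem is deleted over RPC and the nvmf_tgt process is killed and reaped, after the sanity checks traced above (process exists, and its name is not sudo). A minimal equivalent of the killprocess path, with scripts/rpc.py again standing in for rpc_cmd:

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid"    # 1443831 in this run
    wait "$nvmfpid"    # reap it so the harness sees a clean exit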
00:11:42.109
00:11:42.109 real 0m13.356s
00:11:42.109 user 0m52.099s
00:11:42.109 sys 0m1.872s
00:11:42.109 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:42.109 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:42.109 ************************************
00:11:42.109 END TEST nvmf_filesystem_in_capsule
00:11:42.109 ************************************
00:11:42.109 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini
00:11:42.109 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:11:42.109 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync
00:11:42.109 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:11:42.109 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e
00:11:42.109 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:42.109 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:11:42.109 rmmod nvme_tcp
00:11:42.109 rmmod nvme_fabrics
00:11:42.109 rmmod nvme_keyring
00:11:42.109 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:11:42.109 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e
00:11:42.109 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0
00:11:42.109 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:11:42.109 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:11:42.109 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:11:42.109 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:11:42.109 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:11:42.109 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns
00:11:42.109 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:42.109 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:42.109 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:44.647 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:11:44.647
00:11:44.647 real 0m36.711s
00:11:44.647 user 1m47.928s
00:11:44.647 sys 0m9.173s
00:11:44.647 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:44.647 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:11:44.647 ************************************
00:11:44.647 END TEST nvmf_filesystem
00:11:44.647 ************************************
00:11:44.647 19:12:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp
00:11:44.647 19:12:30 nvmf_tcp.nvmf_target_extra --
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:44.647 19:12:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:44.647 19:12:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:44.647 ************************************ 00:11:44.647 START TEST nvmf_target_discovery 00:11:44.647 ************************************ 00:11:44.647 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:44.648 * Looking for test storage... 00:11:44.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.648 19:12:30 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:11:44.648 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.242 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:51.242 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:11:51.242 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:51.242 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:51.242 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:51.242 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:51.242 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:51.242 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:11:51.242 19:12:37 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:51.242 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:11:51.242 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:11:51.242 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:11:51.242 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:11:51.242 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:11:51.242 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:11:51.242 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:51.242 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:51.242 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:51.242 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:51.242 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:51.242 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:51.242 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:51.242 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:51.242 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:51.243 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:51.243 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:51.243 Found net devices under 0000:af:00.0: cvl_0_0 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:51.243 19:12:37 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:51.243 Found net devices under 0000:af:00.1: cvl_0_1 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:51.243 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:51.503 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:51.503 19:12:37 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:51.503 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:11:51.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:51.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms
00:11:51.503
00:11:51.503 --- 10.0.0.2 ping statistics ---
00:11:51.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:51.503 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms
00:11:51.503 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:51.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:51.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms
00:11:51.503
00:11:51.503 --- 10.0.0.1 ping statistics ---
00:11:51.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:51.503 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms
00:11:51.503 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:51.503 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0
00:11:51.503 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:11:51.503 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:51.503 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:11:51.503 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:11:51.503 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:51.503 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:11:51.503 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:11:51.503 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF
00:11:51.503 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:11:51.503 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable
00:11:51.503 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:51.503 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1449826
00:11:51.503 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:11:51.503 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1449826
00:11:51.503 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 1449826 ']'
00:11:51.503 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:51.503 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100
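The two pings close out the network bring-up that nvmf_tcp_init traced over the last several entries: the target-side port (cvl_0_0) is moved into its own network namespace with 10.0.0.2, the initiator-side port (cvl_0_1) stays in the root namespace with 10.0.0.1, and TCP port 4420 is explicitly allowed. Pulled together from this run's commands:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator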
00:11:51.503 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:51.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:51.503 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable
00:11:51.503 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:51.503 [2024-07-24 19:12:37.675383] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization...
00:11:51.503 [2024-07-24 19:12:37.675433] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:51.503 EAL: No free 2048 kB hugepages reported on node 1
00:11:51.763 [2024-07-24 19:12:37.749437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:51.763 [2024-07-24 19:12:37.823340] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:51.763 [2024-07-24 19:12:37.823379] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:51.763 [2024-07-24 19:12:37.823393] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:51.763 [2024-07-24 19:12:37.823405] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:51.763 [2024-07-24 19:12:37.823414] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:51.763 [2024-07-24 19:12:37.823467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:11:51.763 [2024-07-24 19:12:37.823485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:11:51.763 [2024-07-24 19:12:37.823572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:11:51.763 [2024-07-24 19:12:37.823576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:11:52.333 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:11:52.333 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0
00:11:52.333 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:11:52.333 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable
00:11:52.333 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:52.333 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:52.333 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:11:52.333 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:52.333 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:52.333 [2024-07-24 19:12:38.526059] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:52.333 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:52.333 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4
00:11:52.333 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:11:52.333 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512
00:11:52.333 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:52.333 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:52.333 Null1
00:11:52.333 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:52.333 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:11:52.333 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:52.333 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:52.333 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:52.333 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
00:11:52.333 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:52.333 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:52.593 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:52.593 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:52.593 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:52.593 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:52.593 [2024-07-24 19:12:38.582379] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:52.593 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
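discovery.sh is populating the target with four identical subsystems, each backed by a null bdev and listening on the same NVMe/TCP portal; the Null1/cnode1 pass just completed and Null2 onward follow below. The traced loop is equivalent to this sketch (scripts/rpc.py standing in for rpc_cmd; flags mirrored from the trace):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    for i in $(seq 1 4); do
        scripts/rpc.py bdev_null_create "Null$i" 102400 512    # 102400 blocks x 512 B
        scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
        scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done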
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.593 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.593 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:52.593 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.593 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.593 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.593 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:52.593 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.593 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.593 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.593 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:52.593 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:52.593 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.593 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.593 Null3 00:11:52.593 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.593 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:52.593 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.593 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.593 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.593 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:52.593 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.593 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.593 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.593 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:52.593 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.594 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.594 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:11:52.594 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:52.594 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:52.594 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.594 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.594 Null4 00:11:52.594 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.594 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:52.594 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.594 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.594 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.594 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:52.594 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.594 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.594 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.594 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:52.594 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.594 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.594 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.594 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:52.594 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.594 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.594 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.594 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:52.594 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.594 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.594 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.594 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 
--hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 4420 00:11:52.854 00:11:52.854 Discovery Log Number of Records 6, Generation counter 6 00:11:52.854 =====Discovery Log Entry 0====== 00:11:52.854 trtype: tcp 00:11:52.854 adrfam: ipv4 00:11:52.854 subtype: current discovery subsystem 00:11:52.854 treq: not required 00:11:52.854 portid: 0 00:11:52.854 trsvcid: 4420 00:11:52.854 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:52.854 traddr: 10.0.0.2 00:11:52.854 eflags: explicit discovery connections, duplicate discovery information 00:11:52.854 sectype: none 00:11:52.854 =====Discovery Log Entry 1====== 00:11:52.854 trtype: tcp 00:11:52.854 adrfam: ipv4 00:11:52.854 subtype: nvme subsystem 00:11:52.854 treq: not required 00:11:52.854 portid: 0 00:11:52.854 trsvcid: 4420 00:11:52.854 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:52.854 traddr: 10.0.0.2 00:11:52.854 eflags: none 00:11:52.854 sectype: none 00:11:52.854 =====Discovery Log Entry 2====== 00:11:52.854 trtype: tcp 00:11:52.854 adrfam: ipv4 00:11:52.854 subtype: nvme subsystem 00:11:52.854 treq: not required 00:11:52.854 portid: 0 00:11:52.854 trsvcid: 4420 00:11:52.854 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:52.854 traddr: 10.0.0.2 00:11:52.854 eflags: none 00:11:52.854 sectype: none 00:11:52.854 =====Discovery Log Entry 3====== 00:11:52.854 trtype: tcp 00:11:52.854 adrfam: ipv4 00:11:52.854 subtype: nvme subsystem 00:11:52.854 treq: not required 00:11:52.854 portid: 0 00:11:52.854 trsvcid: 4420 00:11:52.854 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:52.854 traddr: 10.0.0.2 00:11:52.854 eflags: none 00:11:52.854 sectype: none 00:11:52.854 =====Discovery Log Entry 4====== 00:11:52.854 trtype: tcp 00:11:52.854 adrfam: ipv4 00:11:52.854 subtype: nvme subsystem 00:11:52.854 treq: not required 00:11:52.854 portid: 0 00:11:52.854 trsvcid: 4420 00:11:52.854 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:52.854 traddr: 10.0.0.2 00:11:52.854 eflags: none 00:11:52.854 sectype: none 00:11:52.854 =====Discovery Log Entry 5====== 00:11:52.854 trtype: tcp 00:11:52.854 adrfam: ipv4 00:11:52.854 subtype: discovery subsystem referral 00:11:52.854 treq: not required 00:11:52.854 portid: 0 00:11:52.854 trsvcid: 4430 00:11:52.854 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:52.854 traddr: 10.0.0.2 00:11:52.854 eflags: none 00:11:52.854 sectype: none 00:11:52.854 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:52.854 Perform nvmf subsystem discovery via RPC 00:11:52.854 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:52.854 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.854 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.854 [ 00:11:52.854 { 00:11:52.854 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:52.854 "subtype": "Discovery", 00:11:52.854 "listen_addresses": [ 00:11:52.854 { 00:11:52.855 "trtype": "TCP", 00:11:52.855 "adrfam": "IPv4", 00:11:52.855 "traddr": "10.0.0.2", 00:11:52.855 "trsvcid": "4420" 00:11:52.855 } 00:11:52.855 ], 00:11:52.855 "allow_any_host": true, 00:11:52.855 "hosts": [] 00:11:52.855 }, 00:11:52.855 { 00:11:52.855 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:52.855 "subtype": "NVMe", 00:11:52.855 "listen_addresses": [ 00:11:52.855 { 00:11:52.855 "trtype": "TCP", 00:11:52.855 "adrfam": "IPv4", 00:11:52.855 
"traddr": "10.0.0.2", 00:11:52.855 "trsvcid": "4420" 00:11:52.855 } 00:11:52.855 ], 00:11:52.855 "allow_any_host": true, 00:11:52.855 "hosts": [], 00:11:52.855 "serial_number": "SPDK00000000000001", 00:11:52.855 "model_number": "SPDK bdev Controller", 00:11:52.855 "max_namespaces": 32, 00:11:52.855 "min_cntlid": 1, 00:11:52.855 "max_cntlid": 65519, 00:11:52.855 "namespaces": [ 00:11:52.855 { 00:11:52.855 "nsid": 1, 00:11:52.855 "bdev_name": "Null1", 00:11:52.855 "name": "Null1", 00:11:52.855 "nguid": "4AF4B39EB312487AB3DE26BF498C6DA8", 00:11:52.855 "uuid": "4af4b39e-b312-487a-b3de-26bf498c6da8" 00:11:52.855 } 00:11:52.855 ] 00:11:52.855 }, 00:11:52.855 { 00:11:52.855 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:52.855 "subtype": "NVMe", 00:11:52.855 "listen_addresses": [ 00:11:52.855 { 00:11:52.855 "trtype": "TCP", 00:11:52.855 "adrfam": "IPv4", 00:11:52.855 "traddr": "10.0.0.2", 00:11:52.855 "trsvcid": "4420" 00:11:52.855 } 00:11:52.855 ], 00:11:52.855 "allow_any_host": true, 00:11:52.855 "hosts": [], 00:11:52.855 "serial_number": "SPDK00000000000002", 00:11:52.855 "model_number": "SPDK bdev Controller", 00:11:52.855 "max_namespaces": 32, 00:11:52.855 "min_cntlid": 1, 00:11:52.855 "max_cntlid": 65519, 00:11:52.855 "namespaces": [ 00:11:52.855 { 00:11:52.855 "nsid": 1, 00:11:52.855 "bdev_name": "Null2", 00:11:52.855 "name": "Null2", 00:11:52.855 "nguid": "F14C926519434FA38B46DB5B46816EDD", 00:11:52.855 "uuid": "f14c9265-1943-4fa3-8b46-db5b46816edd" 00:11:52.855 } 00:11:52.855 ] 00:11:52.855 }, 00:11:52.855 { 00:11:52.855 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:52.855 "subtype": "NVMe", 00:11:52.855 "listen_addresses": [ 00:11:52.855 { 00:11:52.855 "trtype": "TCP", 00:11:52.855 "adrfam": "IPv4", 00:11:52.855 "traddr": "10.0.0.2", 00:11:52.855 "trsvcid": "4420" 00:11:52.855 } 00:11:52.855 ], 00:11:52.855 "allow_any_host": true, 00:11:52.855 "hosts": [], 00:11:52.855 "serial_number": "SPDK00000000000003", 00:11:52.855 "model_number": "SPDK bdev Controller", 00:11:52.855 "max_namespaces": 32, 00:11:52.855 "min_cntlid": 1, 00:11:52.855 "max_cntlid": 65519, 00:11:52.855 "namespaces": [ 00:11:52.855 { 00:11:52.855 "nsid": 1, 00:11:52.855 "bdev_name": "Null3", 00:11:52.855 "name": "Null3", 00:11:52.855 "nguid": "11B33A97114D44B9AC4228932F146767", 00:11:52.855 "uuid": "11b33a97-114d-44b9-ac42-28932f146767" 00:11:52.855 } 00:11:52.855 ] 00:11:52.855 }, 00:11:52.855 { 00:11:52.855 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:52.855 "subtype": "NVMe", 00:11:52.855 "listen_addresses": [ 00:11:52.855 { 00:11:52.855 "trtype": "TCP", 00:11:52.855 "adrfam": "IPv4", 00:11:52.855 "traddr": "10.0.0.2", 00:11:52.855 "trsvcid": "4420" 00:11:52.855 } 00:11:52.855 ], 00:11:52.855 "allow_any_host": true, 00:11:52.855 "hosts": [], 00:11:52.855 "serial_number": "SPDK00000000000004", 00:11:52.855 "model_number": "SPDK bdev Controller", 00:11:52.855 "max_namespaces": 32, 00:11:52.855 "min_cntlid": 1, 00:11:52.855 "max_cntlid": 65519, 00:11:52.855 "namespaces": [ 00:11:52.855 { 00:11:52.855 "nsid": 1, 00:11:52.855 "bdev_name": "Null4", 00:11:52.855 "name": "Null4", 00:11:52.855 "nguid": "1308C28E64B44E77AE6DBA0D9C23580B", 00:11:52.855 "uuid": "1308c28e-64b4-4e77-ae6d-ba0d9c23580b" 00:11:52.855 } 00:11:52.855 ] 00:11:52.855 } 00:11:52.855 ] 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:52.855 19:12:38 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:52.855 19:12:38 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.855 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.855 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:52.855 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:52.855 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:52.855 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:52.856 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:52.856 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:11:52.856 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:52.856 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:11:52.856 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:52.856 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:52.856 rmmod nvme_tcp 00:11:52.856 rmmod nvme_fabrics 00:11:52.856 rmmod nvme_keyring 00:11:52.856 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:52.856 19:12:39 
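Teardown mirrors setup: each pass of the loop at discovery.sh lines 42-44 deletes one subsystem and its backing bdev, the referral is removed, and bdev_get_bdevs must come back empty before nvmftestfini unloads the kernel modules, hence the rmmod lines above. As a sketch:

    for i in 1 2 3 4; do
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
        rpc_cmd bdev_null_delete "Null$i"
    done
    rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
    check_bdevs=$(rpc_cmd bdev_get_bdevs | jq -r '.[].name')
    [ -z "$check_bdevs" ]       # the test only proceeds if nothing was left behind
    modprobe -v -r nvme-tcp     # nvmftestfini removes nvme-fabrics the same way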
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:11:52.856 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:11:52.856 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1449826 ']' 00:11:52.856 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1449826 00:11:52.856 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 1449826 ']' 00:11:52.856 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 1449826 00:11:52.856 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:11:53.116 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:53.116 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1449826 00:11:53.116 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:53.116 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:53.116 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1449826' 00:11:53.116 killing process with pid 1449826 00:11:53.116 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 1449826 00:11:53.116 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 1449826 00:11:53.116 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:53.116 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:53.116 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:53.116 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:53.116 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:53.116 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.116 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:53.116 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:55.719 00:11:55.719 real 0m10.913s 00:11:55.719 user 0m7.995s 00:11:55.719 sys 0m5.814s 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:55.719 ************************************ 00:11:55.719 END TEST nvmf_target_discovery 00:11:55.719 ************************************ 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:55.719 ************************************ 00:11:55.719 START TEST nvmf_referrals 00:11:55.719 ************************************ 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:55.719 * Looking for test storage... 00:11:55.719 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.719 19:12:41 
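referrals.sh starts by sourcing test/nvmf/common.sh, which sets the referral IPs and port seen below and derives the initiator identity once via nvme-cli. A sketch of that derivation; the suffix-stripping step is an assumption about how the suite splits the NQN, and the UUID is machine-specific (this host's appears throughout the log):

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # bare UUID portion, reused as --hostid
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # Every later discover/connect call passes "${NVME_HOST[@]}".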
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.719 19:12:41 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:55.719 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:55.720 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.720 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.720 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.720 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:55.720 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:55.720 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:11:55.720 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:02.286 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:02.286 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:12:02.286 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:02.286 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:02.286 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:02.286 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:02.286 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:02.286 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # 
net_devs=() 00:12:02.286 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:02.286 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:12:02.286 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:12:02.286 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:12:02.286 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:12:02.286 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:12:02.286 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:12:02.286 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:02.286 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:02.286 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:02.286 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:02.286 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:02.286 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:02.286 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:02.286 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:02.286 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:02.286 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:02.286 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:02.286 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:02.286 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:02.286 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:02.287 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:02.287 19:12:48 
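The branchy block that follows classifies each candidate port by PCI vendor:device pair (0x8086:0x159b is an Intel E810, bound to the ice driver) and then resolves the matching kernel netdev through sysfs. Outside the suite the same lookup can be done by hand; lspci here stands in for the script's internal pci_bus_cache:

    pci=0000:af:00.0
    lspci -n -s "$pci"                   # 8086:159b -> E810, handled by ice
    ls "/sys/bus/pci/devices/$pci/net/"  # netdev name for this function, e.g. cvl_0_0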
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:02.287 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:02.287 Found net devices under 0000:af:00.0: cvl_0_0 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 
00:12:02.287 Found net devices under 0000:af:00.1: cvl_0_1 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:02.287 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:02.546 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:02.546 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:02.546 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:02.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:02.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:12:02.546 00:12:02.546 --- 10.0.0.2 ping statistics --- 00:12:02.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.546 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:12:02.546 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:02.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:02.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:12:02.546 00:12:02.546 --- 10.0.0.1 ping statistics --- 00:12:02.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.546 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:12:02.546 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:02.546 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:12:02.546 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:02.546 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:02.546 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:02.546 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:02.546 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:02.546 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:02.546 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:02.546 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:02.546 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:02.546 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:02.546 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:02.546 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1453850 00:12:02.546 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:02.546 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1453850 00:12:02.546 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 1453850 ']' 00:12:02.546 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.546 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:02.546 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
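Both E810 ports sit in the same chassis, so nvmf_tcp_init (traced above) moves the target-side port into a private network namespace; the initiator (10.0.0.1 on cvl_0_1) and the target (10.0.0.2 on cvl_0_0) then talk over the back-to-back link, and the bidirectional pings prove the path before nvmf_tgt is started inside that namespace. The wiring, reduced to its commands:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator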
00:12:02.546 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:02.546 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:02.546 [2024-07-24 19:12:48.687062] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:12:02.546 [2024-07-24 19:12:48.687107] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:02.546 EAL: No free 2048 kB hugepages reported on node 1 00:12:02.546 [2024-07-24 19:12:48.761399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:02.805 [2024-07-24 19:12:48.836827] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:02.805 [2024-07-24 19:12:48.836869] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:02.805 [2024-07-24 19:12:48.836883] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:02.805 [2024-07-24 19:12:48.836893] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:02.805 [2024-07-24 19:12:48.836902] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:02.805 [2024-07-24 19:12:48.836956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:02.805 [2024-07-24 19:12:48.837051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:02.805 [2024-07-24 19:12:48.837139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:02.805 [2024-07-24 19:12:48.837142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.373 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:03.373 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:03.373 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:03.373 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:03.373 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.373 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:03.373 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:03.373 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.373 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.373 [2024-07-24 19:12:49.550104] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:03.373 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.373 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:03.373 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.373 19:12:49 
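Because the target runs with -e 0xFFFF, every tracepoint group is live; the app_setup_trace notices above spell out the two ways to get at the events. As a sketch, using the exact invocations the log prints:

    spdk_trace -s nvmf -i 0           # snapshot events from the running target
    cp /dev/shm/nvmf_trace.0 /tmp/    # or keep the ring buffer for offline analysis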
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.373 [2024-07-24 19:12:49.566332] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:03.373 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.373 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:03.373 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.373 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.373 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.373 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:03.373 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.373 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.373 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.373 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:03.373 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.373 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.373 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.373 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:03.373 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:03.373 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.373 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.632 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.632 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:03.632 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:03.632 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:03.632 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:03.632 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.632 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:03.632 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.632 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:03.632 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.632 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 
127.0.0.3 127.0.0.4 00:12:03.632 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:03.632 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:03.632 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:03.632 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:03.632 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:03.633 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:03.633 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:03.633 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:03.633 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:03.633 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:03.633 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.633 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.633 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.633 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:03.633 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.633 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.633 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.633 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:03.633 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.633 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.633 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.633 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:03.633 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:03.633 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.633 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.633 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.892 19:12:49 
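Each referral check above runs twice on purpose: get_referral_ips rpc asks the target (nvmf_discovery_get_referrals piped through jq), while get_referral_ips nvme asks the wire (nvme discover against the discovery listener on port 8009), and the two sorted address lists must match, first as "127.0.0.2 127.0.0.3 127.0.0.4" and then as empty once the referrals are removed. The two probes, side by side:

    # Target's view of its referral table
    rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    # Initiator's view, filtering out the discovery subsystem's own record
    nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
        | sort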
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:03.892 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:03.892 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:03.892 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:03.892 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:03.892 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:03.892 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:03.892 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:03.892 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:03.892 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:03.892 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.892 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.892 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.892 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:03.892 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.892 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.892 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.892 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:03.892 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:03.892 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:03.892 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:03.892 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.892 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:03.892 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.892 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.892 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:03.892 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:03.892 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 
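The referral checks traced above reduce to a small RPC round-trip against the target's RPC socket: add referrals, assert the count and sorted addresses, remove them, and assert the list is empty again. A minimal standalone sketch of that round-trip, assuming a running nvmf_tgt and the in-tree scripts/rpc.py client (the rpc_cmd wrapper seen in the trace is a thin layer over the same calls; the rpc.py path here is illustrative):

#!/usr/bin/env bash
# Sketch only: assumes an nvmf target is already up and answering on its RPC socket.
rpc=scripts/rpc.py   # illustrative path to the SPDK RPC client

# Point the discovery service at three referred discovery controllers.
$rpc nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
$rpc nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
$rpc nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430

# The test asserts the count and the sorted addresses, exactly as above.
[[ $($rpc nvmf_discovery_get_referrals | jq length) -eq 3 ]]
$rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

# Tear the referrals back down and confirm the list is empty again.
for a in 127.0.0.2 127.0.0.3 127.0.0.4; do
    $rpc nvmf_discovery_remove_referral -t tcp -a "$a" -s 4430
done
[[ $($rpc nvmf_discovery_get_referrals | jq length) -eq 0 ]]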
00:12:03.892 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:03.892 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:03.892 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:03.892 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:03.892 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:04.151 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:04.151 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:04.151 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:04.151 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:04.151 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:04.152 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:04.152 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:04.152 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:04.152 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:04.152 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:04.152 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:04.152 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:04.152 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:04.411 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:04.411 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:04.411 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.411 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:04.411 19:12:50 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.411 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:04.411 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:04.411 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:04.411 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.411 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:04.411 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:04.411 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:04.411 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.411 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:04.411 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:04.411 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:04.411 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:04.411 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:04.411 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:04.411 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:04.411 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:04.411 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:04.411 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:04.411 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:04.411 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:04.411 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:04.411 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:04.411 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:04.671 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:04.671 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:04.671 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:04.671 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:04.671 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:04.671 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:04.671 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:04.671 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:04.671 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.671 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:04.671 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.671 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:04.671 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:04.671 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.671 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:04.931 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.931 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:04.931 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:04.931 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:04.931 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:04.931 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:04.931 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:04.931 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:04.931 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:04.931 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:04.931 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:04.931 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:04.931 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:04.931 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 
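Every nvme-side verification in this test follows one pattern: query the discovery service in JSON mode and filter the returned log page records by their subtype field with jq. A sketch of that pattern, using the 10.0.0.2:8009 endpoint and the hostnqn/hostid this run happens to use (both come from the sourced nvmf/common.sh; treat the helper name as illustrative):

# Sketch: query the discovery service and classify its records, as the
# get_referral_ips and get_discovery_entries helpers do in the trace above.
disc() {
    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -a 10.0.0.2 -s 8009 -o json
}

# Referral addresses: everything except the current discovery subsystem.
disc | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

# Records of one class, keyed by the same subtype field: referred NVMe
# subsystems versus referred discovery subsystems.
disc | jq '.records[] | select(.subtype == "nvme subsystem")'
disc | jq '.records[] | select(.subtype == "discovery subsystem referral")'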
00:12:04.931 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:04.931 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:12:04.931 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:04.931 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:04.931 rmmod nvme_tcp 00:12:04.931 rmmod nvme_fabrics 00:12:04.931 rmmod nvme_keyring 00:12:04.931 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:04.931 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:12:04.931 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:12:04.931 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1453850 ']' 00:12:04.931 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1453850 00:12:04.931 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 1453850 ']' 00:12:04.931 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 1453850 00:12:04.931 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:04.931 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:04.931 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1453850 00:12:04.931 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:04.931 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:04.931 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1453850' 00:12:04.931 killing process with pid 1453850 00:12:04.931 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 1453850 00:12:04.931 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 1453850 00:12:05.190 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:05.190 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:05.190 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:05.190 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:05.190 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:05.190 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.190 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.190 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.728 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:07.728 00:12:07.728 real 0m11.927s 00:12:07.728 user 0m12.643s 00:12:07.728 sys 0m6.138s 00:12:07.728 19:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:07.728 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:07.728 ************************************ 00:12:07.728 END TEST nvmf_referrals 00:12:07.728 ************************************ 00:12:07.728 19:12:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:07.728 19:12:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:07.728 19:12:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:07.728 19:12:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:07.728 ************************************ 00:12:07.728 START TEST nvmf_connect_disconnect 00:12:07.728 ************************************ 00:12:07.728 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:07.728 * Looking for test storage... 00:12:07.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:07.728 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:07.728 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:07.728 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:07.728 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:07.728 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:07.728 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:07.728 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:07.728 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:07.728 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:07.728 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:07.728 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:07.728 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:07.728 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:07.728 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:07.728 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:07.728 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:07.728 19:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:07.728 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:07.728 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:07.728 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:07.728 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.728 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.728 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.728 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.728 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.728 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:07.728 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.729 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:12:07.729 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:07.729 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:07.729 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:07.729 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:07.729 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:07.729 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:07.729 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:07.729 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:07.729 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:07.729 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:07.729 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:07.729 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:07.729 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:07.729 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:07.729 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:07.729 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:07.729 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.729 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.729 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.729 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:07.729 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:07.729 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:12:07.729 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- 
# set +x 00:12:14.300 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:14.300 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:12:14.300 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:14.300 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:14.300 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:14.300 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:14.300 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:14.301 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:14.301 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:14.301 19:13:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:14.301 Found net devices under 0000:af:00.0: cvl_0_0 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:14.301 Found net devices under 0000:af:00.1: cvl_0_1 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:14.301 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:14.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:14.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:12:14.561 00:12:14.561 --- 10.0.0.2 ping statistics --- 00:12:14.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.561 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:12:14.561 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:14.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
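The ping above (and the reverse ping whose output follows) verify the point-to-point topology that nvmf_tcp_init just assembled: one e810 port is moved into a network namespace to act as the target side, the other stays in the default namespace as the initiator side. A sketch of that setup, with the interface names this particular host enumerated:

# Sketch of the topology nvmf_tcp_init builds from the two e810 ports
# (cvl_0_0 / cvl_0_1 are the names this host happened to assign):
ip netns add cvl_0_0_ns_spdk                 # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move one port into it
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, default ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Reachability check in both directions before any NVMe traffic flows:
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1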
00:12:14.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:12:14.561 00:12:14.561 --- 10.0.0.1 ping statistics --- 00:12:14.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.561 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:12:14.561 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:14.561 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:12:14.561 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:14.561 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:14.561 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:14.561 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:14.561 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:14.561 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:14.561 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:14.561 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:14.561 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:14.561 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:14.561 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:14.561 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1458089 00:12:14.561 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:14.561 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1458089 00:12:14.561 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 1458089 ']' 00:12:14.561 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.561 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:14.561 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.561 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:14.561 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:14.561 [2024-07-24 19:13:00.647139] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
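nvmfappstart launches nvmf_tgt inside that namespace and polls /var/tmp/spdk.sock until the app answers; the trace that follows then provisions the target and runs five connect/disconnect cycles. A condensed sketch of that sequence under stated assumptions: the rpc.py path and the exact nvme connect invocation are mine, while the RPC names and arguments are the ones traced below.

# Sketch: the target runs inside the namespace built above.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# (waitforlisten then polls /var/tmp/spdk.sock until the app responds)

rpc=scripts/rpc.py   # illustrative path
$rpc nvmf_create_transport -t tcp -o -u 8192 -c 0        # -c 0: no in-capsule data
$rpc bdev_malloc_create 64 512                           # 64 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Five connect/disconnect cycles; each iteration produces one of the
# "NQN:... disconnected 1 controller(s)" lines seen later in the log.
for i in $(seq 1 5); do
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
done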
00:12:14.561 [2024-07-24 19:13:00.647186] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.561 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.561 [2024-07-24 19:13:00.720380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:14.561 [2024-07-24 19:13:00.789257] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:14.561 [2024-07-24 19:13:00.789301] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:14.561 [2024-07-24 19:13:00.789316] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:14.561 [2024-07-24 19:13:00.789327] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:14.562 [2024-07-24 19:13:00.789336] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:14.562 [2024-07-24 19:13:00.789393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.562 [2024-07-24 19:13:00.789489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:14.562 [2024-07-24 19:13:00.789571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:14.562 [2024-07-24 19:13:00.789575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.500 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:15.500 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:15.500 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:15.500 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:15.500 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:15.500 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.500 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:15.500 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.500 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:15.500 [2024-07-24 19:13:01.511045] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:15.500 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.500 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:15.500 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.500 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:15.500 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.500 19:13:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:15.500 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:15.500 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.500 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:15.500 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.500 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:15.500 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.500 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:15.500 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.500 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:15.500 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.500 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:15.500 [2024-07-24 19:13:01.565854] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:15.500 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.500 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:15.500 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:15.500 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:18.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.033 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.615 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.907 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.907 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:32.907 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:32.907 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:32.907 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:32.907 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:32.907 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:32.907 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:32.907 19:13:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:32.907 rmmod nvme_tcp 00:12:32.907 rmmod nvme_fabrics 00:12:32.907 rmmod nvme_keyring 00:12:32.907 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:32.907 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:32.907 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:32.907 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1458089 ']' 00:12:32.907 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1458089 00:12:32.907 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1458089 ']' 00:12:32.907 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 1458089 00:12:32.907 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:12:32.907 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:32.907 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1458089 00:12:32.907 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:32.907 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:32.907 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1458089' 00:12:32.907 killing process with pid 1458089 00:12:32.907 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 1458089 00:12:32.907 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 1458089 00:12:33.166 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:33.166 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:33.166 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:33.166 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:33.166 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:33.166 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.166 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:33.167 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.072 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:35.072 00:12:35.072 real 0m27.751s 00:12:35.072 user 1m14.539s 00:12:35.072 sys 0m7.170s 00:12:35.072 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:35.072 19:13:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.072 ************************************ 00:12:35.072 END TEST nvmf_connect_disconnect 00:12:35.072 ************************************ 00:12:35.072 19:13:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:35.072 19:13:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:35.072 19:13:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:35.072 19:13:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:35.331 ************************************ 00:12:35.331 START TEST nvmf_multitarget 00:12:35.331 ************************************ 00:12:35.331 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:35.331 * Looking for test storage... 00:12:35.331 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:35.331 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:35.331 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:35.331 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:35.331 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:35.331 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:35.331 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:35.331 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:35.331 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:35.331 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:35.331 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:35.331 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:35.331 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:35.331 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:35.331 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:35.331 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:35.331 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:35.331 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:35.331 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:35.331 19:13:21 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:35.331 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.331 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.331 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.331 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.331 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.331 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.331 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:35.331 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.331 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:35.332 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
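The very long PATH values above are expected rather than corrupt: paths/export.sh prepends the Go, golangci and protoc directories unconditionally, and it is sourced once per test script, so every run stacks another copy of the same three entries. What that file plausibly contains, inferred only from the prefixes visible in the trace:

# /etc/opt/spdk-pkgdep/paths/export.sh (inferred, not verified)
PATH=/opt/golangci/1.54.2/bin:$PATH
PATH=/opt/go/1.21.1/bin:$PATH
PATH=/opt/protoc/21.7/bin:$PATH
export PATH
echo "$PATH"

Guarding each prepend with a case "$PATH" in *"/opt/go/1.21.1/bin"*) ;; esac style check would keep the file idempotent across repeated sourcing.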
-- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:35.332 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:35.332 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:35.332 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:35.332 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:35.332 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:35.332 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:35.332 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:35.332 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:35.332 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:35.332 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:35.332 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:35.332 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:35.332 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:35.332 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:35.332 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.332 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:35.332 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.332 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:35.332 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:35.332 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:35.332 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:41.909 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:41.909 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:41.909 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:41.909 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:41.909 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:41.909 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:41.909 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:41.909 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:41.909 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
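build_nvmf_app_args assembles the target command line in the NVMF_APP bash array: a shared-memory instance id via -i, a full trace mask via -e 0xFFFF, and optional extras that are skipped on this run. A minimal sketch of the same array-building style; the base command line here is illustrative:

NVMF_APP=(./build/bin/nvmf_tgt)       # illustrative base command
build_nvmf_app_args() {
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
    NVMF_APP+=("${NO_HUGE[@]}")       # an empty array expands to nothing
}
build_nvmf_app_args
"${NVMF_APP[@]}" &                    # later runs prefix this with 'ip netns exec ...'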
nvmf/common.sh@295 -- # local -ga net_devs 00:12:41.909 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:41.909 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:41.909 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:41.909 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:41.909 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:41.909 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:41.909 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:41.909 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:41.909 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:41.909 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:41.909 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:41.909 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:41.909 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:41.909 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:41.909 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:41.909 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:41.909 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:41.909 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:41.909 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:41.909 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:41.910 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.910 19:13:28 
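gather_supported_nvmf_pci_devs first buckets known device IDs into NIC families: Intel 0x1592/0x159b into e810, Intel 0x37d2 into x722, and a list of Mellanox IDs into mlx; with SPDK_TEST_NVMF_NICS=e810 only that bucket is kept. The classification, condensed from the trace above (pci_bus_cache maps "vendor:device" to PCI addresses):

intel=0x8086 mellanox=0x15b3
e810+=(${pci_bus_cache["$intel:0x1592"]})
e810+=(${pci_bus_cache["$intel:0x159b"]})
x722+=(${pci_bus_cache["$intel:0x37d2"]})
mlx+=(${pci_bus_cache["$mellanox:0x101d"]})   # one of several ConnectX IDs
pci_devs=("${e810[@]}")                       # only the E810 addresses survive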
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:41.910 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:41.910 Found net devices under 0000:af:00.0: cvl_0_0 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # 
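Each surviving PCI address is then mapped to its kernel interface through sysfs: the net device registered for a PCI function appears as a directory under /sys/bus/pci/devices/<addr>/net/. The same lookup in isolation:

pci=0000:af:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")   # strip paths, keep names like cvl_0_0
echo "Found net devices under $pci: ${pci_net_devs[*]}"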
echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:41.910 Found net devices under 0000:af:00.1: cvl_0_1 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:41.910 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:42.170 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:42.170 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:42.170 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:42.170 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:42.170 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:42.170 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:42.170 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:42.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
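nvmf_tcp_init turns the two physical ports into a point-to-point test topology: cvl_0_0 moves into a fresh namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and an iptables rule admits the NVMe/TCP port before a ping verifies reachability. The same sequence, extracted from the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # initiator -> target sanity check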
00:12:42.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:12:42.170 00:12:42.170 --- 10.0.0.2 ping statistics --- 00:12:42.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.170 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:12:42.170 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:42.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:42.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:12:42.170 00:12:42.170 --- 10.0.0.1 ping statistics --- 00:12:42.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.170 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:12:42.170 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:42.170 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:42.170 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:42.170 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:42.170 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:42.170 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:42.170 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:42.170 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:42.170 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:42.429 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:42.429 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:42.429 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:42.429 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:42.429 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1464996 00:12:42.429 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:42.429 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1464996 00:12:42.429 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 1464996 ']' 00:12:42.429 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.429 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:42.430 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
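nvmfappstart then launches nvmf_tgt inside the target namespace (-i 0 for the shm id, -e 0xFFFF for tracing, -m 0xF for a four-core mask) and blocks until the app answers on its UNIX-domain RPC socket. A hedged sketch of that readiness loop; the real waitforlisten may use a different probe than rpc_get_methods:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
for _ in $(seq 1 100); do
    # any successful RPC proves the socket is being served
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
done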
00:12:42.430 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:42.430 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:42.430 [2024-07-24 19:13:28.478082] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:12:42.430 [2024-07-24 19:13:28.478129] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:42.430 EAL: No free 2048 kB hugepages reported on node 1 00:12:42.430 [2024-07-24 19:13:28.552717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:42.430 [2024-07-24 19:13:28.622103] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:42.430 [2024-07-24 19:13:28.622147] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:42.430 [2024-07-24 19:13:28.622161] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:42.430 [2024-07-24 19:13:28.622172] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:42.430 [2024-07-24 19:13:28.622181] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:42.430 [2024-07-24 19:13:28.622236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:42.430 [2024-07-24 19:13:28.622323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:42.430 [2024-07-24 19:13:28.622410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:42.430 [2024-07-24 19:13:28.622413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.063 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:43.063 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:12:43.063 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:43.063 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:43.063 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:43.322 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:43.322 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:43.322 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:43.322 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:43.322 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:43.322 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:43.322 "nvmf_tgt_1" 00:12:43.322 19:13:29 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:43.581 "nvmf_tgt_2" 00:12:43.581 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:43.581 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:43.581 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:43.582 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:43.841 true 00:12:43.841 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:43.841 true 00:12:43.841 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:43.841 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:43.841 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:43.841 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:43.841 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:43.841 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:43.841 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:43.841 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:43.841 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:43.841 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:43.841 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:44.101 rmmod nvme_tcp 00:12:44.101 rmmod nvme_fabrics 00:12:44.101 rmmod nvme_keyring 00:12:44.101 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:44.101 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:44.101 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:44.101 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1464996 ']' 00:12:44.101 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1464996 00:12:44.101 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 1464996 ']' 00:12:44.101 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 1464996 00:12:44.101 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:12:44.101 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
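Stripped of tracing, the whole multitarget test is an invariant check over multitarget_rpc.py: one default target exists, two more are created with 32-entry subsystem arrays, nvmf_get_targets must then report three, and after both deletes it must report one again. The flow as a standalone script:

rpc=test/nvmf/target/multitarget_rpc.py
[ "$("$rpc" nvmf_get_targets | jq length)" -eq 1 ]
"$rpc" nvmf_create_target -n nvmf_tgt_1 -s 32
"$rpc" nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$("$rpc" nvmf_get_targets | jq length)" -eq 3 ]
"$rpc" nvmf_delete_target -n nvmf_tgt_1
"$rpc" nvmf_delete_target -n nvmf_tgt_2
[ "$("$rpc" nvmf_get_targets | jq length)" -eq 1 ]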
00:12:44.101 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1464996 00:12:44.101 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:44.101 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:44.101 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1464996' 00:12:44.101 killing process with pid 1464996 00:12:44.101 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 1464996 00:12:44.101 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 1464996 00:12:44.360 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:44.360 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:44.360 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:44.360 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:44.360 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:44.361 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.361 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:44.361 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.268 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:46.268 00:12:46.268 real 0m11.095s 00:12:46.268 user 0m9.447s 00:12:46.268 sys 0m5.919s 00:12:46.268 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:46.268 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:46.268 ************************************ 00:12:46.268 END TEST nvmf_multitarget 00:12:46.268 ************************************ 00:12:46.268 19:13:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:46.268 19:13:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:46.268 19:13:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:46.268 19:13:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:46.528 ************************************ 00:12:46.528 START TEST nvmf_rpc 00:12:46.528 ************************************ 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:46.528 * Looking for test storage... 
00:12:46.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:46.528 19:13:32 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:46.528 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:54.651 19:13:39 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:54.651 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:54.651 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:54.651 
19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:54.651 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:54.652 Found net devices under 0000:af:00.0: cvl_0_0 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:54.652 Found net devices under 0000:af:00.1: cvl_0_1 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:54.652 19:13:39 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:54.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:54.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:12:54.652 00:12:54.652 --- 10.0.0.2 ping statistics --- 00:12:54.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.652 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:54.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:54.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:12:54.652 00:12:54.652 --- 10.0.0.1 ping statistics --- 00:12:54.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.652 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1469007 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1469007 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 1469007 ']' 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.652 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:54.653 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.653 [2024-07-24 19:13:39.952416] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:12:54.653 [2024-07-24 19:13:39.952469] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.653 EAL: No free 2048 kB hugepages reported on node 1 00:12:54.653 [2024-07-24 19:13:40.027871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:54.653 [2024-07-24 19:13:40.110694] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.653 [2024-07-24 19:13:40.110736] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:54.653 [2024-07-24 19:13:40.110750] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:54.653 [2024-07-24 19:13:40.110762] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:54.653 [2024-07-24 19:13:40.110771] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:54.653 [2024-07-24 19:13:40.110822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.653 [2024-07-24 19:13:40.110919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.653 [2024-07-24 19:13:40.111006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:54.653 [2024-07-24 19:13:40.111010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.653 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:54.653 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:12:54.653 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:54.653 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:54.653 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.653 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:54.653 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:54.653 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.653 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.653 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.653 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:54.653 "tick_rate": 2500000000, 00:12:54.653 "poll_groups": [ 00:12:54.653 { 00:12:54.653 "name": "nvmf_tgt_poll_group_000", 00:12:54.653 "admin_qpairs": 0, 00:12:54.653 "io_qpairs": 0, 00:12:54.653 "current_admin_qpairs": 0, 00:12:54.653 "current_io_qpairs": 0, 00:12:54.653 "pending_bdev_io": 0, 00:12:54.653 "completed_nvme_io": 0, 00:12:54.653 "transports": [] 00:12:54.653 }, 00:12:54.653 { 00:12:54.653 "name": "nvmf_tgt_poll_group_001", 00:12:54.653 "admin_qpairs": 0, 00:12:54.653 "io_qpairs": 0, 00:12:54.653 "current_admin_qpairs": 0, 00:12:54.653 "current_io_qpairs": 0, 00:12:54.653 "pending_bdev_io": 0, 00:12:54.653 "completed_nvme_io": 0, 00:12:54.653 "transports": [] 00:12:54.653 }, 00:12:54.653 { 00:12:54.653 "name": "nvmf_tgt_poll_group_002", 00:12:54.653 "admin_qpairs": 0, 00:12:54.653 "io_qpairs": 0, 00:12:54.653 "current_admin_qpairs": 0, 00:12:54.653 "current_io_qpairs": 0, 00:12:54.653 "pending_bdev_io": 0, 00:12:54.653 "completed_nvme_io": 0, 00:12:54.653 "transports": [] 00:12:54.653 }, 00:12:54.653 { 00:12:54.653 "name": "nvmf_tgt_poll_group_003", 00:12:54.653 "admin_qpairs": 0, 00:12:54.653 "io_qpairs": 0, 00:12:54.653 "current_admin_qpairs": 0, 00:12:54.653 "current_io_qpairs": 0, 00:12:54.653 "pending_bdev_io": 0, 00:12:54.653 "completed_nvme_io": 0, 00:12:54.653 "transports": [] 00:12:54.653 } 00:12:54.653 ] 00:12:54.653 }' 00:12:54.653 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:54.653 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:54.653 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:54.653 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:54.653 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
00:12:54.913 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:54.913 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:54.913 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:54.913 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.913 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.913 [2024-07-24 19:13:40.939488] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:54.913 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.913 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:54.913 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.913 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.913 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.913 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:54.913 "tick_rate": 2500000000, 00:12:54.913 "poll_groups": [ 00:12:54.913 { 00:12:54.913 "name": "nvmf_tgt_poll_group_000", 00:12:54.913 "admin_qpairs": 0, 00:12:54.913 "io_qpairs": 0, 00:12:54.913 "current_admin_qpairs": 0, 00:12:54.913 "current_io_qpairs": 0, 00:12:54.913 "pending_bdev_io": 0, 00:12:54.913 "completed_nvme_io": 0, 00:12:54.913 "transports": [ 00:12:54.913 { 00:12:54.913 "trtype": "TCP" 00:12:54.913 } 00:12:54.913 ] 00:12:54.913 }, 00:12:54.913 { 00:12:54.913 "name": "nvmf_tgt_poll_group_001", 00:12:54.913 "admin_qpairs": 0, 00:12:54.913 "io_qpairs": 0, 00:12:54.913 "current_admin_qpairs": 0, 00:12:54.913 "current_io_qpairs": 0, 00:12:54.913 "pending_bdev_io": 0, 00:12:54.913 "completed_nvme_io": 0, 00:12:54.913 "transports": [ 00:12:54.913 { 00:12:54.913 "trtype": "TCP" 00:12:54.913 } 00:12:54.913 ] 00:12:54.913 }, 00:12:54.913 { 00:12:54.913 "name": "nvmf_tgt_poll_group_002", 00:12:54.913 "admin_qpairs": 0, 00:12:54.913 "io_qpairs": 0, 00:12:54.913 "current_admin_qpairs": 0, 00:12:54.913 "current_io_qpairs": 0, 00:12:54.913 "pending_bdev_io": 0, 00:12:54.913 "completed_nvme_io": 0, 00:12:54.913 "transports": [ 00:12:54.913 { 00:12:54.913 "trtype": "TCP" 00:12:54.913 } 00:12:54.913 ] 00:12:54.913 }, 00:12:54.913 { 00:12:54.913 "name": "nvmf_tgt_poll_group_003", 00:12:54.913 "admin_qpairs": 0, 00:12:54.913 "io_qpairs": 0, 00:12:54.913 "current_admin_qpairs": 0, 00:12:54.913 "current_io_qpairs": 0, 00:12:54.913 "pending_bdev_io": 0, 00:12:54.913 "completed_nvme_io": 0, 00:12:54.913 "transports": [ 00:12:54.913 { 00:12:54.913 "trtype": "TCP" 00:12:54.913 } 00:12:54.913 ] 00:12:54.913 } 00:12:54.913 ] 00:12:54.913 }' 00:12:54.913 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:54.913 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:54.913 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:54.913 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:54.913 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:54.913 19:13:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:54.913 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:54.913 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:54.913 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:54.913 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:54.913 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:54.913 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:54.913 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:54.913 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:54.913 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.913 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.913 Malloc1 00:12:54.914 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.914 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:54.914 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.914 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.914 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.914 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:54.914 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.914 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.914 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.914 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:54.914 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.914 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.914 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.914 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:54.914 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.914 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.914 [2024-07-24 19:13:41.126620] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.914 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.914 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:12:54.914 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:54.914 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:12:54.914 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:54.914 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:54.914 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:54.914 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:54.914 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:54.914 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:54.914 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:54.914 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:54.914 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:12:55.173 [2024-07-24 19:13:41.161302] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e' 00:12:55.173 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:55.173 could not add new controller: failed to write to nvme-fabrics device 00:12:55.173 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:55.173 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:55.173 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:55.173 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:55.173 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:55.173 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.173 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.173 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.173 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 
--hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:56.553 19:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:56.553 19:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:56.553 19:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:56.553 19:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:56.553 19:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:58.457 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:58.457 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:58.457 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:58.457 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:58.457 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:58.457 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:58.457 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:58.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.457 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:58.457 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:58.457 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:58.457 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.457 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:58.457 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.457 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:58.457 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:58.457 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.457 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.457 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.457 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.457 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:58.457 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.457 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:58.457 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:58.457 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:58.457 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:58.457 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:58.457 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:58.457 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:58.457 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:58.457 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.457 [2024-07-24 19:13:44.684264] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e' 00:12:58.716 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:58.716 could not add new controller: failed to write to nvme-fabrics device 00:12:58.716 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:58.716 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:58.716 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:58.716 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:58.716 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:58.716 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.716 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.716 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.716 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:00.099 19:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:00.099 19:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:00.099 19:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:00.099 19:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:00.099 19:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 
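[Annotation] This completes the host access-control round trip: the first connect is rejected with "does not allow host" because the host NQN is not on the subsystem's list, nvmf_subsystem_add_host makes the same connect succeed, nvmf_subsystem_remove_host makes it fail again, and nvmf_subsystem_allow_any_host -e disables the ACL so the final connect (whose waitforserial polling is in flight here) goes through. Condensed to bare rpc.py and nvme-cli calls it is roughly the following sketch; HOSTNQN stands in for the uuid-based NQN used above.

  SUBNQN=nqn.2016-06.io.spdk:cnode1
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
  # Rejected: the host is not on the subsystem's allowed-host list.
  nvme connect -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN" || true
  ./scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"      # connect now succeeds
  nvme connect -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"
  nvme disconnect -n "$SUBNQN"
  ./scripts/rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"   # rejected again
  ./scripts/rpc.py nvmf_subsystem_allow_any_host -e "$SUBNQN"        # ACL off: any host may connect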
00:13:02.005 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:02.005 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:02.005 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:02.005 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:02.005 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:02.005 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:02.005 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:02.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.006 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:02.006 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:02.006 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:02.006 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:02.006 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:02.006 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:02.006 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:02.006 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:02.006 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.006 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.006 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.006 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:02.006 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:02.006 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:02.006 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.006 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.006 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.006 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:02.006 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.006 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.006 [2024-07-24 19:13:48.237186] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:02.006 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.006 
19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:02.006 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.006 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.265 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.265 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:02.265 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.265 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.265 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.265 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:03.682 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:03.682 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:03.682 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:03.682 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:03.682 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:05.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
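[Annotation] That was one full pass of the rpc.sh@81 loop, and the same cycle repeats for the remaining iterations below: create the subsystem, add the TCP listener, attach Malloc1 as namespace 5, open the subsystem to any host, connect from the local initiator, wait for the block device, disconnect, then remove the namespace and delete the subsystem. As bare calls, one iteration is roughly this sketch; waitforserial and waitforserial_disconnect are the harness helpers whose lsblk/grep polling appears in the trace.

  for i in $(seq 1 5); do
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"
    waitforserial SPDKISFASTANDAWESOME            # poll lsblk until the serial appears
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    waitforserial_disconnect SPDKISFASTANDAWESOME # poll lsblk until the serial is gone
    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done

The serial-number match is the success criterion: a block device whose SERIAL column reads SPDKISFASTANDAWESOME means the fabric connect and namespace attach both completed.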
00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.589 [2024-07-24 19:13:51.790217] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.589 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.968 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:06.968 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1198 -- # local i=0 00:13:06.968 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:06.968 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:06.968 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:09.505 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.505 [2024-07-24 19:13:55.330141] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.505 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:10.885 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:10.885 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:10.885 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:10.885 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:10.885 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:12.795 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.795 19:13:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.795 [2024-07-24 19:13:58.866589] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.795 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:14.174 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:14.174 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:14.174 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:14.174 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:14.174 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:16.081 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:16.081 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:16.081 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:16.081 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:16.081 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:16.081 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:16.081 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:16.340 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.340 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:16.340 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:16.340 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:16.340 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:16.340 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:16.340 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:16.340 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:16.340 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:16.340 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.340 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.340 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.340 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:16.340 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.340 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.340 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.340 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:16.340 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:16.340 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.340 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.340 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.340 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:16.340 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.340 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.340 [2024-07-24 19:14:02.388667] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:16.340 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.340 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:16.340 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.340 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.340 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.340 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:16.340 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.340 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.340 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.340 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:17.719 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:17.719 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:17.719 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:17.719 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:17.719 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:19.625 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:19.625 19:14:05 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:19.625 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:19.625 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:19.625 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:19.625 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:19.625 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:19.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.625 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:19.625 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:19.625 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:19.625 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:19.625 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:19.625 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:19.625 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:19.625 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:19.625 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.625 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.625 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.625 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:19.625 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.625 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.625 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.625 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:19.625 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:19.625 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:19.625 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.625 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.885 19:14:05 
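[Annotation] From rpc.sh@99 the loop changes shape: five more create/tear-down passes over the same subsystem, this time with no initiator involved. Each pass attaches Malloc1 without an explicit -n, so the target assigns the first free NSID (1), then removes namespace 1 and deletes the subsystem straight away, exercising pure RPC-path churn. A sketch under the same assumptions as above:

  for i in $(seq 1 5); do
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # NSID auto-assigned: 1
    ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done

The closing nvmf_get_stats below then shows the cumulative effect of the whole run per poll group: admin and I/O qpair totals and completed NVMe I/O counts.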
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.885 [2024-07-24 19:14:05.872830] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.885 [2024-07-24 19:14:05.920935] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.885 [2024-07-24 19:14:05.973104] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.885 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.885 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:19.885 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.885 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.885 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.885 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:19.885 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:19.885 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.885 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.885 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.886 [2024-07-24 19:14:06.021226] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.886 [2024-07-24 19:14:06.069382] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.886 19:14:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.886 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.146 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.146 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:20.146 "tick_rate": 2500000000, 00:13:20.146 "poll_groups": [ 00:13:20.146 { 00:13:20.146 "name": "nvmf_tgt_poll_group_000", 00:13:20.146 "admin_qpairs": 2, 00:13:20.146 "io_qpairs": 196, 00:13:20.146 "current_admin_qpairs": 0, 00:13:20.146 "current_io_qpairs": 0, 00:13:20.146 "pending_bdev_io": 0, 00:13:20.146 "completed_nvme_io": 318, 00:13:20.146 "transports": [ 00:13:20.146 { 00:13:20.146 "trtype": "TCP" 00:13:20.146 } 00:13:20.146 ] 00:13:20.146 }, 00:13:20.146 { 00:13:20.146 "name": "nvmf_tgt_poll_group_001", 00:13:20.146 "admin_qpairs": 2, 00:13:20.146 "io_qpairs": 196, 00:13:20.146 "current_admin_qpairs": 0, 00:13:20.146 "current_io_qpairs": 0, 00:13:20.146 "pending_bdev_io": 0, 00:13:20.146 "completed_nvme_io": 246, 00:13:20.146 "transports": [ 00:13:20.146 { 00:13:20.146 "trtype": "TCP" 00:13:20.146 } 00:13:20.146 ] 00:13:20.146 }, 00:13:20.146 { 00:13:20.146 "name": "nvmf_tgt_poll_group_002", 00:13:20.146 "admin_qpairs": 1, 00:13:20.146 "io_qpairs": 196, 00:13:20.146 "current_admin_qpairs": 0, 00:13:20.146 "current_io_qpairs": 0, 00:13:20.146 "pending_bdev_io": 0, 00:13:20.146 "completed_nvme_io": 322, 00:13:20.146 "transports": [ 00:13:20.146 { 00:13:20.146 "trtype": "TCP" 00:13:20.146 } 00:13:20.146 ] 00:13:20.146 }, 00:13:20.146 { 00:13:20.146 "name": "nvmf_tgt_poll_group_003", 00:13:20.146 "admin_qpairs": 2, 00:13:20.146 "io_qpairs": 196, 00:13:20.146 "current_admin_qpairs": 0, 00:13:20.146 "current_io_qpairs": 0, 00:13:20.146 "pending_bdev_io": 0, 00:13:20.146 "completed_nvme_io": 248, 00:13:20.146 "transports": [ 00:13:20.146 { 00:13:20.146 "trtype": "TCP" 00:13:20.146 } 00:13:20.146 ] 00:13:20.146 } 00:13:20.146 ] 00:13:20.146 }' 00:13:20.146 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:20.146 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:20.146 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:20.146 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:20.146 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:20.146 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:20.146 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:20.146 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq 
'.poll_groups[].io_qpairs' 00:13:20.146 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:20.146 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 784 > 0 )) 00:13:20.146 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:20.146 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:20.146 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:20.146 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:20.146 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:13:20.146 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:20.146 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:13:20.146 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:20.146 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:20.146 rmmod nvme_tcp 00:13:20.146 rmmod nvme_fabrics 00:13:20.146 rmmod nvme_keyring 00:13:20.146 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:20.146 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:13:20.146 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:13:20.146 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1469007 ']' 00:13:20.146 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1469007 00:13:20.146 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 1469007 ']' 00:13:20.146 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 1469007 00:13:20.146 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:13:20.146 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:20.146 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1469007 00:13:20.146 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:20.146 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:20.146 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1469007' 00:13:20.146 killing process with pid 1469007 00:13:20.146 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 1469007 00:13:20.146 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 1469007 00:13:20.406 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:20.406 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:20.406 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:20.406 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:20.406 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:20.406 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.406 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:20.406 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.984 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:22.984 00:13:22.984 real 0m36.108s 00:13:22.984 user 1m46.686s 00:13:22.984 sys 0m8.453s 00:13:22.984 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:22.984 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.984 ************************************ 00:13:22.984 END TEST nvmf_rpc 00:13:22.984 ************************************ 00:13:22.984 19:14:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:22.984 19:14:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:22.984 19:14:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:22.984 19:14:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:22.984 ************************************ 00:13:22.984 START TEST nvmf_invalid 00:13:22.984 ************************************ 00:13:22.984 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:22.984 * Looking for test storage... 00:13:22.984 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:22.984 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:22.984 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:22.984 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:22.984 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:22.984 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:22.984 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:22.984 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:22.984 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:22.984 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:22.984 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:22.984 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:22.984 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:22.984 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:22.984 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:22.984 19:14:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:22.984 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:22.984 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:22.984 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:22.984 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:22.984 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:22.984 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:22.984 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:22.984 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.984 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.985 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.985 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:22.985 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.985 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:22.985 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:22.985 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:22.985 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:22.985 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:22.985 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:22.985 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:22.985 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:22.985 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:22.985 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:22.985 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:22.985 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:22.985 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:22.985 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:22.985 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:22.985 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:22.985 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:22.985 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:22.985 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:22.985 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:22.985 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.985 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:22.985 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.985 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:22.985 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:22.985 19:14:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:22.985 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 
]] 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:29.562 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:29.562 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:29.562 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:29.563 Found net devices under 0000:af:00.0: cvl_0_0 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.563 19:14:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:29.563 Found net devices under 0000:af:00.1: cvl_0_1 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:29.563 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:29.563 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:29.563 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:29.563 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:29.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:29.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:13:29.563 00:13:29.563 --- 10.0.0.2 ping statistics --- 00:13:29.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.563 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:13:29.563 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:29.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:29.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:13:29.563 00:13:29.563 --- 10.0.0.1 ping statistics --- 00:13:29.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.563 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:13:29.563 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:29.563 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:13:29.563 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:29.563 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:29.563 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:29.563 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:29.563 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:29.563 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:29.563 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:29.563 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:29.563 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:29.563 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:29.563 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:29.563 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1477079 00:13:29.563 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:29.563 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1477079 00:13:29.563 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 1477079 ']' 00:13:29.563 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.563 19:14:15 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:29.563 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.563 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:29.563 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:29.563 [2024-07-24 19:14:15.188301] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:13:29.563 [2024-07-24 19:14:15.188350] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.563 EAL: No free 2048 kB hugepages reported on node 1 00:13:29.563 [2024-07-24 19:14:15.262182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:29.563 [2024-07-24 19:14:15.335853] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:29.563 [2024-07-24 19:14:15.335894] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:29.563 [2024-07-24 19:14:15.335909] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:29.563 [2024-07-24 19:14:15.335921] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:29.563 [2024-07-24 19:14:15.335930] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
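At this point the harness has launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace via nvmfappstart and waitforlisten blocks until pid 1477079 answers on the UNIX domain socket /var/tmp/spdk.sock. A minimal sketch of that start-and-poll pattern follows, assuming relative spdk paths and using the rpc_get_methods RPC as the liveness probe (the harness's own waitforlisten may probe differently; the retry budget is illustrative):

#!/usr/bin/env bash
# Sketch: start nvmf_tgt in a network namespace and wait for its RPC socket.
# NS, TGT, RPC and the 100x0.1s retry budget are assumptions for illustration.
set -euo pipefail

NS=cvl_0_0_ns_spdk
TGT=./build/bin/nvmf_tgt     # assumed relative path to the SPDK target app
RPC=./scripts/rpc.py         # assumed relative path to rpc.py
SOCK=/var/tmp/spdk.sock

ip netns exec "$NS" "$TGT" -i 0 -e 0xFFFF -m 0xF &
pid=$!

# Poll the RPC socket: rpc.py exits non-zero until the app is listening.
for _ in $(seq 1 100); do
    if "$RPC" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; then
        echo "nvmf_tgt (pid $pid) is up"
        exit 0
    fi
    sleep 0.1
done
echo "nvmf_tgt failed to start" >&2
kill "$pid" 2>/dev/null || true
exit 1
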
00:13:29.563 [2024-07-24 19:14:15.335981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.563 [2024-07-24 19:14:15.336004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:29.563 [2024-07-24 19:14:15.336104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:29.563 [2024-07-24 19:14:15.336108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.823 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:29.823 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:13:29.823 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:29.823 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:29.823 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:29.823 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:29.823 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:29.823 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode1412 00:13:30.082 [2024-07-24 19:14:16.190309] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:30.082 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:30.082 { 00:13:30.082 "nqn": "nqn.2016-06.io.spdk:cnode1412", 00:13:30.082 "tgt_name": "foobar", 00:13:30.082 "method": "nvmf_create_subsystem", 00:13:30.082 "req_id": 1 00:13:30.082 } 00:13:30.082 Got JSON-RPC error response 00:13:30.082 response: 00:13:30.082 { 00:13:30.082 "code": -32603, 00:13:30.082 "message": "Unable to find target foobar" 00:13:30.082 }' 00:13:30.082 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:30.082 { 00:13:30.082 "nqn": "nqn.2016-06.io.spdk:cnode1412", 00:13:30.082 "tgt_name": "foobar", 00:13:30.082 "method": "nvmf_create_subsystem", 00:13:30.082 "req_id": 1 00:13:30.082 } 00:13:30.082 Got JSON-RPC error response 00:13:30.082 response: 00:13:30.082 { 00:13:30.082 "code": -32603, 00:13:30.082 "message": "Unable to find target foobar" 00:13:30.082 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:30.082 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:30.082 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode1554 00:13:30.342 [2024-07-24 19:14:16.383034] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1554: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:30.342 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:30.342 { 00:13:30.342 "nqn": "nqn.2016-06.io.spdk:cnode1554", 00:13:30.342 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:30.342 "method": "nvmf_create_subsystem", 00:13:30.342 "req_id": 1 00:13:30.342 } 00:13:30.342 Got JSON-RPC error response 
00:13:30.342 response: 00:13:30.342 { 00:13:30.342 "code": -32602, 00:13:30.342 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:30.342 }' 00:13:30.342 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:30.342 { 00:13:30.342 "nqn": "nqn.2016-06.io.spdk:cnode1554", 00:13:30.342 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:30.342 "method": "nvmf_create_subsystem", 00:13:30.342 "req_id": 1 00:13:30.342 } 00:13:30.342 Got JSON-RPC error response 00:13:30.342 response: 00:13:30.342 { 00:13:30.342 "code": -32602, 00:13:30.342 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:30.342 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:30.342 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:30.342 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode12690 00:13:30.342 [2024-07-24 19:14:16.555529] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12690: invalid model number 'SPDK_Controller' 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:30.602 { 00:13:30.602 "nqn": "nqn.2016-06.io.spdk:cnode12690", 00:13:30.602 "model_number": "SPDK_Controller\u001f", 00:13:30.602 "method": "nvmf_create_subsystem", 00:13:30.602 "req_id": 1 00:13:30.602 } 00:13:30.602 Got JSON-RPC error response 00:13:30.602 response: 00:13:30.602 { 00:13:30.602 "code": -32602, 00:13:30.602 "message": "Invalid MN SPDK_Controller\u001f" 00:13:30.602 }' 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:30.602 { 00:13:30.602 "nqn": "nqn.2016-06.io.spdk:cnode12690", 00:13:30.602 "model_number": "SPDK_Controller\u001f", 00:13:30.602 "method": "nvmf_create_subsystem", 00:13:30.602 "req_id": 1 00:13:30.602 } 00:13:30.602 Got JSON-RPC error response 00:13:30.602 response: 00:13:30.602 { 00:13:30.602 "code": -32602, 00:13:30.602 "message": "Invalid MN SPDK_Controller\u001f" 00:13:30.602 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# printf %x 34 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll < length )) 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:30.602 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
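The unrolled xtrace above (which continues below for the remaining positions) is gen_random_s appending one character at a time: for each position it picks a code point from the 32..127 chars table, prints it with printf %x, and materializes the character with echo -e '\xNN'. A compact sketch of the same loop, under the assumption that a uniform pick from 32..127 matches the table (the function name is illustrative; the harness seeds RANDOM=0 earlier so runs are reproducible):

# Sketch: build a random printable-ASCII string of $1 characters,
# mirroring the per-character printf/echo -e pattern in the trace above.
gen_random_s_sketch() {
    local length=$1 ll code string=''
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( 32 + RANDOM % 96 ))                   # printable ASCII 32..127
        string+=$(echo -e "\\x$(printf %x "$code")")   # append one character
    done
    echo "$string"
}
gen_random_s_sketch 21   # e.g. a 21-character serial like the one under test
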
00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100
00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64'
00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d
00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83
00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53'
00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S
00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55
00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37'
00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7
00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ " == \- ]]
00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '"Tv_b\m9JX'\'')"1uG!dS7'
00:13:30.603 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '"Tv_b\m9JX'\'')"1uG!dS7' nqn.2016-06.io.spdk:cnode13147
00:13:30.863 [2024-07-24 19:14:16.908692] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13147: invalid serial number '"Tv_b\m9JX')"1uG!dS7'
00:13:30.863 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request:
00:13:30.863 {
00:13:30.863 "nqn": "nqn.2016-06.io.spdk:cnode13147",
00:13:30.863 "serial_number": "\"Tv_b\\m9JX\u007f'\'')\"1uG!dS7",
00:13:30.863 "method": "nvmf_create_subsystem",
00:13:30.863 "req_id": 1
00:13:30.863 }
00:13:30.863 Got JSON-RPC error response
00:13:30.863 response:
00:13:30.863 {
00:13:30.863 "code": -32602,
00:13:30.863 "message": "Invalid SN \"Tv_b\\m9JX\u007f'\'')\"1uG!dS7"
00:13:30.863 }'
00:13:30.863 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request:
00:13:30.863 {
00:13:30.863 "nqn": "nqn.2016-06.io.spdk:cnode13147",
00:13:30.863 "serial_number": "\"Tv_b\\m9JX\u007f')\"1uG!dS7",
00:13:30.863 "method": "nvmf_create_subsystem",
00:13:30.863 "req_id": 1
00:13:30.863 }
00:13:30.863 Got JSON-RPC error response
00:13:30.863 response:
00:13:30.863 {
00:13:30.863 "code": -32602,
00:13:30.863 "message": "Invalid SN \"Tv_b\\m9JX\u007f')\"1uG!dS7"
00:13:30.863 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:13:30.863 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41
00:13:30.863 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll
00:13:30.863 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:13:30.863 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:13:30.863 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:13:30.863 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:13:30.863 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:13:30.863 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124
00:13:30.863 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c'
00:13:30.863 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|'
00:13:30.863 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
[... 40 further iterations of the same (( ll < length )) / printf %x / echo -e / string+= pattern append the remaining characters NDEUlF&TC9/9wc4[L!U}/UCe|*Cvz\IR@Q 5\+~= one at a time; the run crosses into 19:14:17 partway through ...]
00:13:31.125 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:13:31.125 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ | == \- ]]
00:13:31.125 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '|NDEUlF&TC9/9wc4[L!U}/UCe|*Cvz\IR@Q 5\+~='
00:13:31.125 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '|NDEUlF&TC9/9wc4[L!U}/UCe|*Cvz\IR@Q 5\+~=' nqn.2016-06.io.spdk:cnode19399
00:13:31.384 [2024-07-24 19:14:17.422391] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19399: invalid model number '|NDEUlF&TC9/9wc4[L!U}/UCe|*Cvz\IR@Q 5\+~='
00:13:31.384 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request:
00:13:31.384 {
00:13:31.384 "nqn": "nqn.2016-06.io.spdk:cnode19399",
00:13:31.384 "model_number": "|NDEUlF&TC9/9wc4[L!U}/UCe|*Cvz\\IR@Q 5\\+~=",
00:13:31.384 "method": "nvmf_create_subsystem",
00:13:31.384 "req_id": 1
00:13:31.384 }
00:13:31.384 Got JSON-RPC error response
00:13:31.384 response:
00:13:31.384 {
00:13:31.384 "code": -32602,
00:13:31.384 "message": "Invalid MN |NDEUlF&TC9/9wc4[L!U}/UCe|*Cvz\\IR@Q 5\\+~="
00:13:31.384 }'
00:13:31.384 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request:
00:13:31.384 {
00:13:31.384 "nqn": "nqn.2016-06.io.spdk:cnode19399",
00:13:31.384 "model_number": "|NDEUlF&TC9/9wc4[L!U}/UCe|*Cvz\\IR@Q 5\\+~=",
00:13:31.384 "method": "nvmf_create_subsystem",
00:13:31.384 "req_id": 1
00:13:31.384 }
00:13:31.384 Got JSON-RPC error response
00:13:31.384 response:
00:13:31.384 {
00:13:31.384 "code": -32602,
00:13:31.384 "message": "Invalid MN |NDEUlF&TC9/9wc4[L!U}/UCe|*Cvz\\IR@Q 5\\+~="
00:13:31.384 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:13:31.384 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp
00:13:31.384 [2024-07-24 19:14:17.603063] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
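The per-character trace above is invalid.sh's gen_random_s helper at work. A condensed sketch of that helper, reconstructed from the xtrace output (the chars pool of decimal code points 32-127 and the printf/echo -e round-trip are visible above; the exact handling of a leading '-' at line 28 is an assumption):

```bash
#!/usr/bin/env bash
# Sketch of gen_random_s as reconstructed from the xtrace above; the real
# target/invalid.sh may differ in details such as the leading-'-' guard.
gen_random_s() {
	local length=$1 ll
	# Code points 32..127: printable ASCII plus DEL, which is how the
	# unprintable 0x7f byte ended up inside the rejected serial number.
	local chars=({32..127}) string=
	for ((ll = 0; ll < length; ll++)); do
		# printf renders a random code point as hex; echo -e turns it into a byte.
		string+=$(echo -e "\x$(printf '%x' "${chars[RANDOM % ${#chars[@]}]}")")
	done
	# Line 28's [[ ... == \- ]] test guards against a leading '-', which
	# rpc.py would otherwise parse as an option (replacement char assumed here).
	[[ ${string:0:1} == - ]] && string=${string/#-/+}
	echo "$string"
}

gen_random_s 41   # e.g. '|NDEUlF&TC9/9wc4[L!U}/UCe|*Cvz\IR@Q 5\+~='
```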
00:13:31.644 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
00:13:31.644 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]]
00:13:31.644 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo ''
00:13:31.644 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1
00:13:31.644 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=
00:13:31.644 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421
00:13:31.903 [2024-07-24 19:14:17.984351] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2
00:13:31.903 19:14:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request:
00:13:31.903 {
00:13:31.903 "nqn": "nqn.2016-06.io.spdk:cnode",
00:13:31.903 "listen_address": {
00:13:31.903 "trtype": "tcp",
00:13:31.903 "traddr": "",
00:13:31.903 "trsvcid": "4421"
00:13:31.903 },
00:13:31.903 "method": "nvmf_subsystem_remove_listener",
00:13:31.903 "req_id": 1
00:13:31.903 }
00:13:31.903 Got JSON-RPC error response
00:13:31.903 response:
00:13:31.903 {
00:13:31.903 "code": -32602,
00:13:31.903 "message": "Invalid parameters"
00:13:31.903 }'
00:13:31.903 19:14:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request:
00:13:31.903 {
00:13:31.903 "nqn": "nqn.2016-06.io.spdk:cnode",
00:13:31.903 "listen_address": {
00:13:31.903 "trtype": "tcp",
00:13:31.903 "traddr": "",
00:13:31.903 "trsvcid": "4421"
00:13:31.903 },
00:13:31.903 "method": "nvmf_subsystem_remove_listener",
00:13:31.903 "req_id": 1
00:13:31.903 }
00:13:31.903 Got JSON-RPC error response
00:13:31.903 response:
00:13:31.903 {
00:13:31.903 "code": -32602,
00:13:31.903 "message": "Invalid parameters"
00:13:31.903 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]]
00:13:31.903 19:14:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7134 -i 0
00:13:32.162 [2024-07-24 19:14:18.168932] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7134: invalid cntlid range [0-65519]
00:13:32.162 19:14:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request:
00:13:32.162 {
00:13:32.162 "nqn": "nqn.2016-06.io.spdk:cnode7134",
00:13:32.162 "min_cntlid": 0,
00:13:32.162 "method": "nvmf_create_subsystem",
00:13:32.162 "req_id": 1
00:13:32.162 }
00:13:32.162 Got JSON-RPC error response
00:13:32.162 response:
00:13:32.162 {
00:13:32.162 "code": -32602,
00:13:32.162 "message": "Invalid cntlid range [0-65519]"
00:13:32.162 }'
00:13:32.163 19:14:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request:
00:13:32.163 {
00:13:32.163 "nqn": "nqn.2016-06.io.spdk:cnode7134",
00:13:32.163 "min_cntlid": 0,
00:13:32.163 "method": "nvmf_create_subsystem",
00:13:32.163 "req_id": 1
00:13:32.163 }
00:13:32.163 Got JSON-RPC error response
00:13:32.163 response:
00:13:32.163 {
00:13:32.163 "code": -32602,
00:13:32.163 "message": "Invalid cntlid range [0-65519]"
00:13:32.163 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
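From here the script walks the cntlid boundaries (min/max must stay within 1..65519 with min <= max), and every probe repeats one capture-and-match pattern: run an RPC that must fail, keep its combined output, and glob-match the JSON-RPC error text. A minimal standalone sketch of that pattern (the expect_error helper is illustrative, not part of invalid.sh):

```bash
#!/usr/bin/env bash
# Sketch of the negative-test pattern used above: the RPC is expected to
# fail, so success is itself a test failure.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

expect_error() {
	local expected=$1 out
	shift
	if out=$("$rpc_py" "$@" 2>&1); then
		echo "unexpectedly succeeded: $*" >&2
		return 1
	fi
	# The JSON-RPC error body lands in $out; assert on the message text.
	[[ $out == *"$expected"* ]]
}

expect_error 'Invalid cntlid range' nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18795 -i 65520
expect_error 'Invalid cntlid range' nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2050 -I 0
```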
00:13:32.163 19:14:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18795 -i 65520
00:13:32.163 [2024-07-24 19:14:18.357561] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18795: invalid cntlid range [65520-65519]
00:13:32.163 19:14:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request:
00:13:32.163 {
00:13:32.163 "nqn": "nqn.2016-06.io.spdk:cnode18795",
00:13:32.163 "min_cntlid": 65520,
00:13:32.163 "method": "nvmf_create_subsystem",
00:13:32.163 "req_id": 1
00:13:32.163 }
00:13:32.163 Got JSON-RPC error response
00:13:32.163 response:
00:13:32.163 {
00:13:32.163 "code": -32602,
00:13:32.163 "message": "Invalid cntlid range [65520-65519]"
00:13:32.163 }'
00:13:32.163 19:14:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request:
00:13:32.163 {
00:13:32.163 "nqn": "nqn.2016-06.io.spdk:cnode18795",
00:13:32.163 "min_cntlid": 65520,
00:13:32.163 "method": "nvmf_create_subsystem",
00:13:32.163 "req_id": 1
00:13:32.163 }
00:13:32.163 Got JSON-RPC error response
00:13:32.163 response:
00:13:32.163 {
00:13:32.163 "code": -32602,
00:13:32.163 "message": "Invalid cntlid range [65520-65519]"
00:13:32.163 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:13:32.422 19:14:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2050 -I 0
00:13:32.422 [2024-07-24 19:14:18.546176] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2050: invalid cntlid range [1-0]
00:13:32.422 19:14:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request:
00:13:32.422 {
00:13:32.422 "nqn": "nqn.2016-06.io.spdk:cnode2050",
00:13:32.422 "max_cntlid": 0,
00:13:32.422 "method": "nvmf_create_subsystem",
00:13:32.422 "req_id": 1
00:13:32.422 }
00:13:32.422 Got JSON-RPC error response
00:13:32.422 response:
00:13:32.422 {
00:13:32.422 "code": -32602,
00:13:32.422 "message": "Invalid cntlid range [1-0]"
00:13:32.422 }'
00:13:32.422 19:14:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request:
00:13:32.422 {
00:13:32.422 "nqn": "nqn.2016-06.io.spdk:cnode2050",
00:13:32.422 "max_cntlid": 0,
00:13:32.422 "method": "nvmf_create_subsystem",
00:13:32.422 "req_id": 1
00:13:32.422 }
00:13:32.422 Got JSON-RPC error response
00:13:32.422 response:
00:13:32.422 {
00:13:32.422 "code": -32602,
00:13:32.422 "message": "Invalid cntlid range [1-0]"
00:13:32.422 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:13:32.681 19:14:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5852 -I 65520
00:13:32.681 [2024-07-24 19:14:18.738828] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5852: invalid cntlid range [1-65520]
00:13:32.681 19:14:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request:
00:13:32.681 {
00:13:32.681 "nqn": "nqn.2016-06.io.spdk:cnode5852",
00:13:32.681 "max_cntlid": 65520,
00:13:32.681 "method": "nvmf_create_subsystem",
00:13:32.681 "req_id": 1
00:13:32.681 }
00:13:32.681 Got JSON-RPC error response
00:13:32.681 response:
00:13:32.681 {
00:13:32.681 "code": -32602,
00:13:32.681 "message": "Invalid cntlid range [1-65520]"
00:13:32.681 }'
00:13:32.681 19:14:18
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:32.681 { 00:13:32.681 "nqn": "nqn.2016-06.io.spdk:cnode5852", 00:13:32.681 "max_cntlid": 65520, 00:13:32.681 "method": "nvmf_create_subsystem", 00:13:32.681 "req_id": 1 00:13:32.681 } 00:13:32.681 Got JSON-RPC error response 00:13:32.681 response: 00:13:32.681 { 00:13:32.681 "code": -32602, 00:13:32.681 "message": "Invalid cntlid range [1-65520]" 00:13:32.681 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:32.681 19:14:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3655 -i 6 -I 5 00:13:32.681 [2024-07-24 19:14:18.915408] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3655: invalid cntlid range [6-5] 00:13:32.941 19:14:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:32.941 { 00:13:32.941 "nqn": "nqn.2016-06.io.spdk:cnode3655", 00:13:32.941 "min_cntlid": 6, 00:13:32.941 "max_cntlid": 5, 00:13:32.941 "method": "nvmf_create_subsystem", 00:13:32.941 "req_id": 1 00:13:32.941 } 00:13:32.941 Got JSON-RPC error response 00:13:32.941 response: 00:13:32.941 { 00:13:32.941 "code": -32602, 00:13:32.941 "message": "Invalid cntlid range [6-5]" 00:13:32.941 }' 00:13:32.941 19:14:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:32.941 { 00:13:32.941 "nqn": "nqn.2016-06.io.spdk:cnode3655", 00:13:32.941 "min_cntlid": 6, 00:13:32.941 "max_cntlid": 5, 00:13:32.941 "method": "nvmf_create_subsystem", 00:13:32.941 "req_id": 1 00:13:32.941 } 00:13:32.941 Got JSON-RPC error response 00:13:32.941 response: 00:13:32.941 { 00:13:32.941 "code": -32602, 00:13:32.941 "message": "Invalid cntlid range [6-5]" 00:13:32.941 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:32.941 19:14:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:32.941 19:14:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:32.941 { 00:13:32.941 "name": "foobar", 00:13:32.941 "method": "nvmf_delete_target", 00:13:32.941 "req_id": 1 00:13:32.941 } 00:13:32.941 Got JSON-RPC error response 00:13:32.941 response: 00:13:32.941 { 00:13:32.941 "code": -32602, 00:13:32.941 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:32.941 }' 00:13:32.942 19:14:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:32.942 { 00:13:32.942 "name": "foobar", 00:13:32.942 "method": "nvmf_delete_target", 00:13:32.942 "req_id": 1 00:13:32.942 } 00:13:32.942 Got JSON-RPC error response 00:13:32.942 response: 00:13:32.942 { 00:13:32.942 "code": -32602, 00:13:32.942 "message": "The specified target doesn't exist, cannot delete it." 
00:13:32.942 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:32.942 19:14:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:32.942 19:14:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:32.942 19:14:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:32.942 19:14:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:32.942 19:14:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:32.942 19:14:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:32.942 19:14:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:32.942 19:14:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:32.942 rmmod nvme_tcp 00:13:32.942 rmmod nvme_fabrics 00:13:32.942 rmmod nvme_keyring 00:13:32.942 19:14:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:32.942 19:14:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:32.942 19:14:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:32.942 19:14:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1477079 ']' 00:13:32.942 19:14:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1477079 00:13:32.942 19:14:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 1477079 ']' 00:13:32.942 19:14:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 1477079 00:13:32.942 19:14:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:13:32.942 19:14:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:32.942 19:14:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1477079 00:13:32.942 19:14:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:32.942 19:14:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:32.942 19:14:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1477079' 00:13:32.942 killing process with pid 1477079 00:13:32.942 19:14:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 1477079 00:13:32.942 19:14:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 1477079 00:13:33.201 19:14:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:33.201 19:14:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:33.201 19:14:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:33.201 19:14:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:33.201 19:14:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:33.201 19:14:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.201 
19:14:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:33.201 19:14:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:35.739 00:13:35.739 real 0m12.695s 00:13:35.739 user 0m19.761s 00:13:35.739 sys 0m5.902s 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:35.739 ************************************ 00:13:35.739 END TEST nvmf_invalid 00:13:35.739 ************************************ 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:35.739 ************************************ 00:13:35.739 START TEST nvmf_connect_stress 00:13:35.739 ************************************ 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:35.739 * Looking for test storage... 00:13:35.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # 
NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:35.739 19:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.309 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:42.309 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:42.309 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:13:42.309 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:42.309 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:42.309 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:42.309 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:42.309 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:42.309 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:42.309 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:13:42.309 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:42.309 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:13:42.309 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:42.309 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:42.309 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:42.309 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:42.309 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:42.309 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- 
# (( 2 == 0 )) 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:42.310 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:42.310 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:42.310 Found net devices under 0000:af:00.0: cvl_0_0 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:42.310 19:14:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:42.310 Found net devices under 0000:af:00.1: cvl_0_1 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 
dev cvl_0_0
00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:13:42.310 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:42.310 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms
00:13:42.310
00:13:42.310 --- 10.0.0.2 ping statistics ---
00:13:42.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:42.310 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms
00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:42.310 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:42.310 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms
00:13:42.310
00:13:42.310 --- 10.0.0.1 ping statistics ---
00:13:42.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:42.310 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms
00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0
00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:13:42.310 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:13:42.311 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:13:42.311 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:13:42.311 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable
00:13:42.311 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:42.311 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1481710
00:13:42.311 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:13:42.311 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1481710
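Stripped of the xtrace noise, the namespace split those two pings just validated comes down to the sequence below (interface names cvl_0_0/cvl_0_1 and the namespace name are this machine's; run as root):

```bash
#!/usr/bin/env bash
# Sketch of nvmf_tcp_init as traced above: the target-side port (cvl_0_0)
# moves into its own netns with 10.0.0.2, the initiator side (cvl_0_1)
# stays in the root namespace with 10.0.0.1.
set -e
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"

ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP traffic (port 4420) in through the initiator-side interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Same sanity checks the trace runs, in both directions.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Moving one port of the e810 pair into a private namespace gives the test a genuine TCP path across physical hardware while both endpoints live on a single host.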
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 1481710 ']' 00:13:42.311 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.311 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:42.311 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.311 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:42.311 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.311 [2024-07-24 19:14:28.543241] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:13:42.311 [2024-07-24 19:14:28.543290] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.570 EAL: No free 2048 kB hugepages reported on node 1 00:13:42.570 [2024-07-24 19:14:28.617954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:42.570 [2024-07-24 19:14:28.686999] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:42.570 [2024-07-24 19:14:28.687044] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:42.570 [2024-07-24 19:14:28.687054] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:42.570 [2024-07-24 19:14:28.687063] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:42.570 [2024-07-24 19:14:28.687069] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
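The trace above is nvmf/common.sh building the NVMe/TCP test topology: one e810 port (cvl_0_0) is moved into a fresh network namespace and addressed as the target at 10.0.0.2/24, the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24, iptables opens TCP port 4420, and a one-packet ping in each direction confirms the link before nvmf_tgt is launched inside the namespace. A minimal sketch of the same setup, assuming two connected interfaces with the hypothetical names eth_tgt and eth_ini (the real script derives cvl_0_0/cvl_0_1 from the PCI devices it discovered):

  # create the target namespace and move one port into it
  ip netns add nvmf_ns                  # hypothetical name; this run uses cvl_0_0_ns_spdk
  ip link set eth_tgt netns nvmf_ns
  # address both ends: initiator 10.0.0.1, target 10.0.0.2
  ip addr add 10.0.0.1/24 dev eth_ini
  ip netns exec nvmf_ns ip addr add 10.0.0.2/24 dev eth_tgt
  # bring links up and allow NVMe/TCP traffic on the default port 4420
  ip link set eth_ini up
  ip netns exec nvmf_ns ip link set eth_tgt up
  ip netns exec nvmf_ns ip link set lo up
  iptables -I INPUT 1 -i eth_ini -p tcp --dport 4420 -j ACCEPT
  # verify reachability in both directions, as the log does
  ping -c 1 10.0.0.2
  ip netns exec nvmf_ns ping -c 1 10.0.0.1
  # launch the SPDK target inside the namespace (same flags as this run)
  ip netns exec nvmf_ns ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &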
00:13:42.570 [2024-07-24 19:14:28.687201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:42.570 [2024-07-24 19:14:28.687266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:42.570 [2024-07-24 19:14:28.687267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.138 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:43.138 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:13:43.138 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:43.138 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:43.138 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.397 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:43.397 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:43.397 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.397 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.397 [2024-07-24 19:14:29.399026] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:43.397 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.397 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:43.397 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.397 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.397 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.397 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:43.397 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.397 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.397 [2024-07-24 19:14:29.439895] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:43.397 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.397 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:43.397 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.397 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.397 NULL1 00:13:43.397 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.397 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@21 -- # PERF_PID=1481806 00:13:43.397 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:43.397 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:43.397 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:43.397 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.398 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1481806 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.398 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.696 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.696 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1481806 00:13:43.696 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.696 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.696 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.979 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.979 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1481806 00:13:43.979 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.979 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.979 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.547 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.547 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1481806 00:13:44.547 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.547 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.547 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.806 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.806 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1481806 00:13:44.806 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.806 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.806 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.065 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.065 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1481806 00:13:45.065 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.065 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.065 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.324 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.324 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1481806 00:13:45.324 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.324 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.324 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.892 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.892 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1481806 00:13:45.892 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.892 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.892 19:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.151 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.151 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1481806 00:13:46.151 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.151 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.151 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.410 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.410 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1481806 00:13:46.410 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.410 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.410 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.668 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.668 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1481806 00:13:46.668 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.668 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.668 19:14:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.927 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.927 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1481806 00:13:46.927 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.927 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.927 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.495 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.495 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1481806 00:13:47.495 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.495 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.495 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.754 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.754 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1481806 00:13:47.754 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.754 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.754 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.013 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.013 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1481806 00:13:48.013 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.013 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.013 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.271 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.271 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1481806 00:13:48.271 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.271 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.271 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.530 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.530 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1481806 00:13:48.530 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.530 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.530 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.098 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.098 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1481806 00:13:49.098 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.098 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.098 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.358 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.358 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1481806 00:13:49.358 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.358 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.358 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.617 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.617 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1481806 00:13:49.617 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.617 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.617 19:14:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.877 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.877 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1481806 00:13:49.877 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.877 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.877 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.136 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.136 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1481806 00:13:50.136 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.136 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.136 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.704 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.704 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1481806 00:13:50.704 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.704 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.704 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.963 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.963 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1481806 00:13:50.963 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.963 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.963 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.222 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.222 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1481806 00:13:51.222 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.222 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.222 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.482 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.482 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1481806 00:13:51.482 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.482 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.482 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.741 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.741 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1481806 00:13:51.741 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.741 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.741 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.310 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.310 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1481806 00:13:52.310 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.310 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.310 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.569 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.569 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1481806 00:13:52.569 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.569 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.569 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.828 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.828 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1481806 00:13:52.828 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.828 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.828 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.087 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.087 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1481806 00:13:53.087 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.087 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.087 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.346 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.346 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1481806 00:13:53.346 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.346 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.346 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.346 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:53.915 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.915 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1481806 00:13:53.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: 
kill: (1481806) - No such process 00:13:53.915 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1481806 00:13:53.915 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:53.915 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:53.915 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:53.916 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:53.916 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:13:53.916 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:53.916 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:13:53.916 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:53.916 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:53.916 rmmod nvme_tcp 00:13:53.916 rmmod nvme_fabrics 00:13:53.916 rmmod nvme_keyring 00:13:53.916 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:53.916 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:13:53.916 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:13:53.916 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1481710 ']' 00:13:53.916 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1481710 00:13:53.916 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 1481710 ']' 00:13:53.916 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 1481710 00:13:53.916 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:13:53.916 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:53.916 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1481710 00:13:53.916 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:53.916 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:53.916 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1481710' 00:13:53.916 killing process with pid 1481710 00:13:53.916 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 1481710 00:13:53.916 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 1481710 00:13:54.175 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:54.175 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:54.175 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:13:54.175 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:54.175 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:54.175 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.175 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:54.175 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.082 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:56.082 00:13:56.082 real 0m20.748s 00:13:56.082 user 0m41.244s 00:13:56.082 sys 0m10.138s 00:13:56.082 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:56.082 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.082 ************************************ 00:13:56.082 END TEST nvmf_connect_stress 00:13:56.082 ************************************ 00:13:56.082 19:14:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:56.082 19:14:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:56.082 19:14:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:56.082 19:14:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:56.342 ************************************ 00:13:56.342 START TEST nvmf_fused_ordering 00:13:56.342 ************************************ 00:13:56.342 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:56.342 * Looking for test storage... 
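The connect_stress run that finishes above follows a simple pattern: the connect_stress tool is started in the background against nqn.2016-06.io.spdk:cnode1 for 10 seconds (-t 10), rpc.txt is filled by the seq 1 20 loop (apparently appending one command block per iteration), and the script then keeps issuing RPCs for as long as the tool is alive. The repeated kill -0 1481806 / rpc_cmd pairs in the trace suggest a loop roughly like the following sketch (a reconstruction from the trace, not the verbatim script):

  # hammer the target with RPCs while the stress tool (PERF_PID) runs;
  # once it exits, kill -0 fails with "No such process" and the loop ends
  while kill -0 "$PERF_PID"; do
      rpc_cmd < "$rpcs"       # rpc.txt, built from the 20 repeated blocks
  done
  wait "$PERF_PID"            # reap the exit status (line 38 in the trace)
  rm -f "$rpcs"

After the loop, nvmftestfini unloads nvme-tcp, nvme-fabrics and nvme-keyring, kills the target (pid 1481710, running as reactor_1), removes the namespace with tracing suppressed, and flushes the initiator address, which is the teardown visible just before the END TEST banner.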
00:13:56.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:56.342 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:56.342 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:56.342 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:56.342 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:56.342 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:56.342 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:56.342 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:56.342 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:56.342 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:56.342 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:56.342 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:56.342 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:56.342 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:56.342 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:56.342 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:56.342 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:56.342 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:56.342 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:56.342 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:56.342 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:56.342 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:56.342 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:56.342 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.343 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.343 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.343 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:56.343 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.343 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:13:56.343 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:56.343 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:56.343 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:56.343 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:56.343 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:56.343 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:13:56.343 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:56.343 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:56.343 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:56.343 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:56.343 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:56.343 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:56.343 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:56.343 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:56.343 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.343 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:56.343 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.343 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:56.343 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:56.343 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:13:56.343 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:02.982 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:02.982 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:02.983 19:14:48 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:02.983 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:02.983 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
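Above, gather_supported_nvmf_pci_devs matches PCI functions against a table of Intel e810/x722 and Mellanox device IDs; the lines that follow then resolve each matched function (0000:af:00.0 and 0000:af:00.1 here) to its kernel net device through sysfs, which is where the cvl_0_0/cvl_0_1 names come from. A sketch of that sysfs lookup for one function, using a PCI address taken from this log:

  pci=0000:af:00.0
  # every entry under .../net/ is a kernel interface bound to this function
  for dev in /sys/bus/pci/devices/$pci/net/*; do
      [[ -e $dev ]] || continue   # glob stays literal if no driver is bound
      echo "net device under $pci: ${dev##*/}"
  done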
00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:02.983 Found net devices under 0000:af:00.0: cvl_0_0 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:02.983 Found net devices under 0000:af:00.1: cvl_0_1 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:02.983 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:02.983 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:02.983 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:02.983 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:02.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:02.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:14:02.983 00:14:02.983 --- 10.0.0.2 ping statistics --- 00:14:02.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.983 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:14:02.983 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:02.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:02.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:14:02.983 00:14:02.984 --- 10.0.0.1 ping statistics --- 00:14:02.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.984 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:14:02.984 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:02.984 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:02.984 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:02.984 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:02.984 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:02.984 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:02.984 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:02.984 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:02.984 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:02.984 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:02.984 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:02.984 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:02.984 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:02.984 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1487285 00:14:02.984 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:02.984 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1487285 00:14:02.984 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 1487285 ']' 00:14:02.984 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.984 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:02.984 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.984 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:02.984 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:02.984 [2024-07-24 19:14:49.144864] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:14:02.984 [2024-07-24 19:14:49.144914] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.984 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.984 [2024-07-24 19:14:49.218469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.243 [2024-07-24 19:14:49.290779] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:03.243 [2024-07-24 19:14:49.290816] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:03.243 [2024-07-24 19:14:49.290826] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:03.243 [2024-07-24 19:14:49.290835] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:03.243 [2024-07-24 19:14:49.290842] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:03.243 [2024-07-24 19:14:49.290872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.812 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:03.812 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:14:03.812 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:03.812 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:03.812 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.812 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:03.812 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:03.812 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.812 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.812 [2024-07-24 19:14:49.989120] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:03.812 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.812 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:03.812 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.812 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.812 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.812 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:03.812 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.812 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@10 -- # set +x 00:14:03.812 [2024-07-24 19:14:50.005273] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:03.812 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.812 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:03.812 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.812 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.812 NULL1 00:14:03.812 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.812 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:03.812 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.812 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.812 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.812 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:03.812 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.812 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.812 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.812 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:03.812 [2024-07-24 19:14:50.049400] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
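The rpc_cmd calls above are plain JSON-RPCs against the target's default /var/tmp/spdk.sock socket (the unix socket is shared across network namespaces, so no netns exec is needed). A sketch of the same provisioning with scripts/rpc.py, values copied from this run; the -o / -u 8192 transport options are reproduced verbatim from the trace rather than interpreted.

#!/usr/bin/env bash
# Sketch: provision the fused-ordering subsystem the way fused_ordering.sh does.
set -e
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

"$RPC" nvmf_create_transport -t tcp -o -u 8192                 # TCP transport init
"$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
"$RPC" bdev_null_create NULL1 1000 512   # 1000 MB null bdev, 512 B blocks: the "1GB" namespace reported below
"$RPC" bdev_wait_for_examine
"$RPC" nvmf_subsystem_add_ns "$NQN" NULL1
# The test binary then connects and logs fused_ordering(0)..(1023), as below:
# .../test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'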
00:14:03.812 [2024-07-24 19:14:50.049433] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1487418 ]
00:14:04.074 EAL: No free 2048 kB hugepages reported on node 1
00:14:04.333 Attached to nqn.2016-06.io.spdk:cnode1
00:14:04.333 Namespace ID: 1 size: 1GB
00:14:04.333 fused_ordering(0) … 00:14:06.304 fused_ordering(1023) [all 1024 fused-ordering iterations completed between 00:14:04.333 and 00:14:06.304]
00:14:06.304 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:14:06.304 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:14:06.304 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup
00:14:06.304 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync
00:14:06.304 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:14:06.304 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e
00:14:06.304 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20}
00:14:06.304 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:14:06.304 rmmod nvme_tcp
00:14:06.304 rmmod nvme_fabrics
00:14:06.304 rmmod nvme_keyring
00:14:06.304 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:14:06.304 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e
00:14:06.304 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0
00:14:06.304 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1487285 ']'
00:14:06.304 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1487285
00:14:06.304 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 1487285 ']'
00:14:06.304 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 1487285
00:14:06.304 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname
00:14:06.304 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:14:06.304 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1487285
00:14:06.304 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:14:06.304 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:14:06.304 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1487285'
00:14:06.304 killing process with pid 1487285
00:14:06.304 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 1487285
00:14:06.304 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 1487285
00:14:06.564 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:14:06.564 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:14:06.564 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:14:06.564 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:14:06.564 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns
00:14:06.564 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:06.564 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:14:06.564 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:08.469 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:14:08.469
00:14:08.469 real 0m12.348s
00:14:08.469 user 0m6.140s
00:14:08.469 sys 0m6.973s
00:14:08.469 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable
00:14:08.469 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:08.469 ************************************
00:14:08.469 END TEST nvmf_fused_ordering
00:14:08.469 ************************************
00:14:08.729 19:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:14:08.729 19:14:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:14:08.729 19:14:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
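The exit path above is symmetric with the setup: nvmfcleanup unloads the initiator-side kernel modules, killprocess stops the target reactor by pid, and nvmf_tcp_fini removes the namespace and flushes the leftover address. A condensed sketch; `ip netns del` is an assumption about what the internal _remove_spdk_ns helper amounts to, since the trace only shows it eval'd with its output discarded.

#!/usr/bin/env bash
# Sketch: unwind the fused-ordering setup, mirroring nvmftestfini above.
NVMFPID=1487285               # pid reported by nvmfappstart in this run
sync
modprobe -v -r nvme-tcp       # drops nvme_tcp, nvme_fabrics, nvme_keyring (as logged)
modprobe -v -r nvme-fabrics
kill "$NVMFPID"               # killprocess; the harness then waits on the pid
ip netns del cvl_0_0_ns_spdk  # assumption: the effect of _remove_spdk_ns
ip -4 addr flush cvl_0_1

00:14:08.729 19:14:54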
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:08.729 ************************************ 00:14:08.729 START TEST nvmf_ns_masking 00:14:08.729 ************************************ 00:14:08.729 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:08.729 * Looking for test storage... 00:14:08.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:08.729 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:… [the same three toolchain prefixes already repeated six times over in the inherited value, then /usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin and the stock system directories through /var/lib/snapd/snap/bin]
00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:… [same expansion with /opt/go prepended]
00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:… [same expansion with /opt/protoc prepended]
00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH
00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:… [final exported value, as above]
00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0
00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']'
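One detail worth pulling out of this prologue: the initiator identity is generated once per test run. nvme gen-hostnqn produces a UUID-based host NQN, and the UUID tail doubles as the host ID. A sketch of that derivation; the ${VAR##*:} trim is an assumption about how nvmf/common.sh extracts the ID, since the trace shows only the resulting values.

#!/usr/bin/env bash
# Sketch: derive the initiator identity used by the masking tests.
NVME_HOSTNQN=$(nvme gen-hostnqn)  # e.g. nqn.2014-08.org.nvmexpress:uuid:006f0d1b-...
NVME_HOSTID=${NVME_HOSTNQN##*:}   # assumption: host ID = UUID tail of the NQN
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
printf '%s\n' "${NVME_HOST[@]}"

00:14:08.730 19:14:54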
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=bd72b376-922d-4a7c-b272-94b34ae6ccc2 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=d0ed87f0-5094-4c80-840e-5a04daa1caf8 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=907c2438-4acc-4168-b409-fddfacd38733 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:08.730 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:15.302 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:15.302 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:15.302 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:14:15.302 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:15.302 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:15.302 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:15.302 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:15.302 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:15.302 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:15.302 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:15.302 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:15.302 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:15.302 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:15.302 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:15.302 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:15.302 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:15.302 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:15.302 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:15.302 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:15.302 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:15.302 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:15.302 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:15.302 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:15.302 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:15.302 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:15.303 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:15.303 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:15.303 Found net devices under 0000:af:00.0: cvl_0_0 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ 
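[Note] The pci_net_devs glob traced above is how the harness resolves each supported NIC PCI function to its kernel net device. A minimal standalone sketch of that lookup, using a PCI address taken from this run:

    #!/usr/bin/env bash
    # List the kernel net devices behind one PCI function, as the
    # gather_supported_nvmf_pci_devs loop does via a sysfs glob.
    pci=0000:af:00.0                      # address from this log
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $path ]] || continue        # glob may match nothing
        echo "Found net devices under $pci: ${path##*/}"
    done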
tcp == tcp ]] 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:15.303 Found net devices under 0000:af:00.1: cvl_0_1 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:15.303 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:15.303 19:15:01 
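[Note] nvmf_tcp_init isolates the target-side port (cvl_0_0) in its own network namespace so one physical host can act as both target (10.0.0.2) and initiator (10.0.0.1). A condensed reproduction of the commands traced above, using the interface names from this log (requires root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up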
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:15.563 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:15.563 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:15.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:15.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:14:15.563 00:14:15.563 --- 10.0.0.2 ping statistics --- 00:14:15.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.563 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:14:15.563 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:15.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:15.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:14:15.563 00:14:15.563 --- 10.0.0.1 ping statistics --- 00:14:15.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.563 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:14:15.563 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:15.563 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:15.563 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:15.563 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:15.563 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:15.563 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:15.563 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:15.563 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:15.563 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:15.563 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:15.563 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:15.563 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:15.563 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:15.563 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1491616 00:14:15.563 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:15.563 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1491616 00:14:15.563 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1491616 ']' 00:14:15.563 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.563 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:15.563 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
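[Note] After the cross-namespace pings succeed, nvmfappstart launches nvmf_tgt inside that namespace and then waits for its RPC socket to come up. Roughly equivalent shell, with the polling loop simplified (the real waitforlisten helper also checks that the PID is still alive):

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket until the target answers.
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done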
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.563 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:15.563 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:15.563 [2024-07-24 19:15:01.654930] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:14:15.563 [2024-07-24 19:15:01.654979] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:15.563 EAL: No free 2048 kB hugepages reported on node 1 00:14:15.563 [2024-07-24 19:15:01.725493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.563 [2024-07-24 19:15:01.799169] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:15.563 [2024-07-24 19:15:01.799210] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:15.563 [2024-07-24 19:15:01.799219] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:15.563 [2024-07-24 19:15:01.799228] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:15.563 [2024-07-24 19:15:01.799235] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:15.563 [2024-07-24 19:15:01.799257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.500 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:16.500 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:16.500 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:16.500 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:16.500 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:16.500 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:16.500 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:16.500 [2024-07-24 19:15:02.651064] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:16.500 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:16.500 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:16.500 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:16.759 Malloc1 00:14:16.759 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:17.017 Malloc2 00:14:17.017 19:15:03 
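[Note] With the target up, the test provisions its backing store over RPC: one TCP transport and two 64 MiB, 512-byte-block malloc bdevs. The exact calls from the trace (rpc.py abbreviates the full scripts/rpc.py path used in this log):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1   # 64 MiB, 512 B blocks
    rpc.py bdev_malloc_create 64 512 -b Malloc2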
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:17.017 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:17.276 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:17.533 [2024-07-24 19:15:03.558262] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:17.533 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:17.533 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 907c2438-4acc-4168-b409-fddfacd38733 -a 10.0.0.2 -s 4420 -i 4 00:14:17.533 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:17.533 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:17.533 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:17.533 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:17.533 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:20.137 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:20.137 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:20.137 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:20.137 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:20.137 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:20.137 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:20.137 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:20.137 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:20.137 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:20.137 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:20.137 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:20.137 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:20.137 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:20.137 [ 0]:0x1 00:14:20.137 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
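[Note] Every visibility check that follows uses the same pattern, visible in the @43-@45 trace lines: a namespace counts as visible when it appears in nvme list-ns and reports a non-zero NGUID. Reconstructed from the trace (the real helper parametrizes the controller name; /dev/nvme0 is the one discovered above):

    ns_is_visible() {
        nvme list-ns /dev/nvme0 | grep "$1"
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        # A masked namespace identifies with an all-zero NGUID.
        [[ $nguid != "00000000000000000000000000000000" ]]
    }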
/dev/nvme0 -n 0x1 -o json 00:14:20.137 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:20.137 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d937c2d3ccc4409ea7c33a6cc170db63 00:14:20.137 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d937c2d3ccc4409ea7c33a6cc170db63 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:20.137 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:20.137 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:20.137 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:20.137 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:20.137 [ 0]:0x1 00:14:20.137 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:20.137 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:20.137 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d937c2d3ccc4409ea7c33a6cc170db63 00:14:20.137 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d937c2d3ccc4409ea7c33a6cc170db63 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:20.137 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:20.137 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:20.137 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:20.137 [ 1]:0x2 00:14:20.137 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:20.137 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:20.137 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=75920390f77f492da12b41beabf3d9fd 00:14:20.137 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 75920390f77f492da12b41beabf3d9fd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:20.137 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:20.137 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:20.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.137 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.396 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:20.396 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:20.396 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
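[Note] This is the pivot of the test: namespace 1 is removed and re-added with --no-auto-visible, after which no host can see it until it is explicitly allowed. The RPC pair from the trace:

    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible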
target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 907c2438-4acc-4168-b409-fddfacd38733 -a 10.0.0.2 -s 4420 -i 4 00:14:20.656 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:20.656 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:20.656 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:20.656 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:14:20.656 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:14:20.656 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:22.563 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:22.563 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:22.563 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:22.563 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:22.563 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:22.563 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:22.563 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:22.563 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:22.563 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:22.563 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:22.563 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:22.563 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:22.563 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:22.563 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:22.563 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:22.563 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:22.563 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:22.563 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:22.563 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:22.563 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:22.823 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
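[Note] The NOT wrapper traced here runs a command and succeeds only if that command fails; it is how the suite asserts the masked namespace is invisible. A simplified stand-in (the real helper in autotest_common.sh also validates that its argument is executable and inspects the exit status, as the es= lines show):

    NOT() {
        ! "$@"
    }
    # Usage: assert that nsid 1 is hidden from this host.
    NOT ns_is_visible 0x1 && echo "namespace 1 is masked, as expected"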
/dev/nvme0 -n 0x1 -o json 00:14:22.823 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:22.823 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:22.823 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.823 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:22.823 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:22.823 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:22.823 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:22.823 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:22.823 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:22.823 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:22.823 [ 0]:0x2 00:14:22.823 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:22.823 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:22.823 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=75920390f77f492da12b41beabf3d9fd 00:14:22.823 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 75920390f77f492da12b41beabf3d9fd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.823 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:23.082 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:23.082 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:23.082 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:23.082 [ 0]:0x1 00:14:23.082 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:23.082 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:23.082 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d937c2d3ccc4409ea7c33a6cc170db63 00:14:23.082 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d937c2d3ccc4409ea7c33a6cc170db63 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:23.082 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:23.082 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:23.082 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:23.082 [ 1]:0x2 00:14:23.082 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 
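[Note] Visibility is then toggled per host NQN: nvmf_ns_add_host unmasks nsid 1 so it reports its real NGUID again, and nvmf_ns_remove_host re-masks it. The calls from the trace:

    SUBSYS=nqn.2016-06.io.spdk:cnode1
    HOST=nqn.2016-06.io.spdk:host1
    rpc.py nvmf_ns_add_host    "$SUBSYS" 1 "$HOST"   # unmask nsid 1 for host1
    rpc.py nvmf_ns_remove_host "$SUBSYS" 1 "$HOST"   # mask it again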
-o json 00:14:23.082 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:23.082 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=75920390f77f492da12b41beabf3d9fd 00:14:23.082 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 75920390f77f492da12b41beabf3d9fd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:23.083 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:23.342 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:23.342 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:23.342 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:23.342 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:23.342 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:23.342 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:23.342 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:23.342 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:23.342 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:23.342 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:23.342 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:23.342 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:23.342 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:23.342 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:23.342 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:23.342 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:23.342 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:23.342 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:23.342 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:23.342 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:23.342 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:23.342 [ 0]:0x2 00:14:23.342 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:23.342 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:14:23.342 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=75920390f77f492da12b41beabf3d9fd 00:14:23.342 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 75920390f77f492da12b41beabf3d9fd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:23.342 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:23.342 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:23.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.342 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:23.602 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:23.602 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 907c2438-4acc-4168-b409-fddfacd38733 -a 10.0.0.2 -s 4420 -i 4 00:14:23.860 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:23.860 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:23.860 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:23.860 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:23.860 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:23.860 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:25.766 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:25.766 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:25.766 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:25.766 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:25.766 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:25.766 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:25.766 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:25.766 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:26.025 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:26.025 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:26.025 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:26.025 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:14:26.025 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:26.025 [ 0]:0x1 00:14:26.025 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:26.025 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:26.025 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d937c2d3ccc4409ea7c33a6cc170db63 00:14:26.025 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d937c2d3ccc4409ea7c33a6cc170db63 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.025 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:26.025 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:26.025 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:26.025 [ 1]:0x2 00:14:26.026 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:26.026 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:26.026 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=75920390f77f492da12b41beabf3d9fd 00:14:26.026 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 75920390f77f492da12b41beabf3d9fd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.026 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:26.291 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:26.291 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:26.291 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:26.291 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:26.291 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:26.291 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:26.291 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:26.291 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:26.291 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:26.291 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:26.291 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:26.291 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:26.291 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:26.291 19:15:12 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.291 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:26.291 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:26.291 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:26.291 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:26.291 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:26.291 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:26.291 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:26.291 [ 0]:0x2 00:14:26.291 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:26.291 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:26.291 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=75920390f77f492da12b41beabf3d9fd 00:14:26.291 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 75920390f77f492da12b41beabf3d9fd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.291 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:26.291 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:26.291 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:26.291 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:26.291 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:26.291 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:26.291 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:26.291 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:26.291 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:26.291 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:26.291 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:26.291 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:26.551 [2024-07-24 19:15:12.564442] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:26.551 request: 00:14:26.551 { 00:14:26.551 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.551 "nsid": 2, 00:14:26.551 "host": "nqn.2016-06.io.spdk:host1", 00:14:26.551 "method": "nvmf_ns_remove_host", 00:14:26.551 "req_id": 1 00:14:26.551 } 00:14:26.551 Got JSON-RPC error response 00:14:26.551 response: 00:14:26.551 { 00:14:26.551 "code": -32602, 00:14:26.551 "message": "Invalid parameters" 00:14:26.551 } 00:14:26.551 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:26.551 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:26.551 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:26.551 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:26.551 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:26.552 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:26.552 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:26.552 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:26.552 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:26.552 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:26.552 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:26.552 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:26.552 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:26.552 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:26.552 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:26.552 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:26.552 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:26.552 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.552 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:26.552 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:26.552 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:26.552 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:26.552 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
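[Note] The -32602 "Invalid parameters" response above is the expected negative case: nsid 2 was created auto-visible, so per-host visibility cannot be revoked for it. Scripted, the assertion looks roughly like:

    if rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 \
            nqn.2016-06.io.spdk:host1; then
        echo "FAIL: expected Invalid parameters (-32602)" >&2
        exit 1
    fi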
target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:26.552 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:26.552 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:26.552 [ 0]:0x2 00:14:26.552 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:26.552 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:26.552 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=75920390f77f492da12b41beabf3d9fd 00:14:26.552 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 75920390f77f492da12b41beabf3d9fd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.552 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:26.552 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:26.552 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.552 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1494066 00:14:26.552 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:26.552 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:26.552 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1494066 /var/tmp/host.sock 00:14:26.552 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1494066 ']' 00:14:26.552 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:14:26.552 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:26.552 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:26.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:26.552 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:26.552 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:26.552 [2024-07-24 19:15:12.783235] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:14:26.552 [2024-07-24 19:15:12.783284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1494066 ] 00:14:26.811 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.811 [2024-07-24 19:15:12.852889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.811 [2024-07-24 19:15:12.925682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.379 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:27.379 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:27.379 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.638 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:27.897 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid bd72b376-922d-4a7c-b272-94b34ae6ccc2 00:14:27.897 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:27.897 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g BD72B376922D4A7CB27294B34AE6CCC2 -i 00:14:27.897 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid d0ed87f0-5094-4c80-840e-5a04daa1caf8 00:14:27.897 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:27.897 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g D0ED87F050944C80840E5A04DAA1CAF8 -i 00:14:28.157 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:28.416 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:28.416 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:28.416 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:28.676 nvme0n1 00:14:28.676 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:28.676 19:15:14 
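[Note] uuid2nguid converts a canonical UUID into the 32-hex-digit NGUID form the add_ns RPC expects; the trace shows the tr -d - step, and the uppercase result suggests the helper also upcases. A sketch with the first UUID from this run (the trailing -i is passed exactly as in the trace):

    uuid=bd72b376-922d-4a7c-b272-94b34ae6ccc2
    nguid=$(tr -d - <<< "${uuid^^}")   # strip dashes, uppercase
    echo "$nguid"                      # BD72B376922D4A7CB27294B34AE6CCC2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$nguid" -i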
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:28.935 nvme1n2 00:14:28.935 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:28.935 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:28.935 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:28.935 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:28.935 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:29.194 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:29.194 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:29.194 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:29.194 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:29.194 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ bd72b376-922d-4a7c-b272-94b34ae6ccc2 == \b\d\7\2\b\3\7\6\-\9\2\2\d\-\4\a\7\c\-\b\2\7\2\-\9\4\b\3\4\a\e\6\c\c\c\2 ]] 00:14:29.194 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:29.194 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:29.194 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:29.453 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ d0ed87f0-5094-4c80-840e-5a04daa1caf8 == \d\0\e\d\8\7\f\0\-\5\0\9\4\-\4\c\8\0\-\8\4\0\e\-\5\a\0\4\d\a\a\1\c\a\f\8 ]] 00:14:29.453 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1494066 00:14:29.453 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1494066 ']' 00:14:29.453 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1494066 00:14:29.453 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:29.453 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:29.453 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1494066 00:14:29.453 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:29.453 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:29.453 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 
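[Note] The final verification runs from the host side: each attached bdev's UUID, read over the host RPC socket, must round-trip to the UUID the namespace was created with. From the trace:

    rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'
    # expected: bd72b376-922d-4a7c-b272-94b34ae6ccc2
    rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 | jq -r '.[].uuid'
    # expected: d0ed87f0-5094-4c80-840e-5a04daa1caf8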
'killing process with pid 1494066' 00:14:29.453 killing process with pid 1494066 00:14:29.453 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1494066 00:14:29.453 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1494066 00:14:30.022 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:30.022 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:14:30.022 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:14:30.022 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:30.022 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:30.022 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:30.022 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:30.022 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:30.022 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:30.022 rmmod nvme_tcp 00:14:30.022 rmmod nvme_fabrics 00:14:30.022 rmmod nvme_keyring 00:14:30.022 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:30.022 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:30.022 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:30.022 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1491616 ']' 00:14:30.022 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1491616 00:14:30.022 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1491616 ']' 00:14:30.022 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1491616 00:14:30.022 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:30.022 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:30.022 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1491616 00:14:30.022 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:30.022 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:30.022 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1491616' 00:14:30.022 killing process with pid 1491616 00:14:30.022 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1491616 00:14:30.022 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1491616 00:14:30.281 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:30.281 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:30.281 
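[Note] Teardown mirrors setup: delete the subsystem, unload the kernel NVMe-oF modules, stop the target, and drop the network namespace. A hedged sketch of the equivalent manual cleanup; the ip netns delete step is an assumption about what the _remove_spdk_ns helper traced below amounts to:

    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"
    ip netns delete cvl_0_0_ns_spdk    # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1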
19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:30.281 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:30.281 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:30.281 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.281 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:30.281 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:32.817 00:14:32.817 real 0m23.768s 00:14:32.817 user 0m23.503s 00:14:32.817 sys 0m8.023s 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:32.817 ************************************ 00:14:32.817 END TEST nvmf_ns_masking 00:14:32.817 ************************************ 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:32.817 ************************************ 00:14:32.817 START TEST nvmf_nvme_cli 00:14:32.817 ************************************ 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:32.817 * Looking for test storage... 
00:14:32.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.817 19:15:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:32.817 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:14:32.818 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:32.818 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:32.818 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:32.818 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:32.818 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:32.818 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.818 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:32.818 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.818 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:32.818 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:32.818 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:32.818 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:39.451 19:15:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:39.451 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:39.451 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:39.451 19:15:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:39.451 Found net devices under 0000:af:00.0: cvl_0_0 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:39.451 Found net devices under 0000:af:00.1: cvl_0_1 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:39.451 19:15:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:39.451 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:39.452 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:39.452 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:39.452 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:39.452 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:39.452 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:39.452 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:39.452 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:39.452 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:39.452 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:39.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:39.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:14:39.452 00:14:39.452 --- 10.0.0.2 ping statistics --- 00:14:39.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.452 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:14:39.452 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:39.452 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:39.452 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:14:39.452 00:14:39.452 --- 10.0.0.1 ping statistics --- 00:14:39.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.452 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:14:39.452 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:39.452 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:39.452 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:39.452 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:39.452 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:39.452 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:39.452 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:39.452 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:39.452 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:39.452 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:39.452 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:39.452 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:39.452 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.452 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1498304 00:14:39.452 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:39.452 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1498304 00:14:39.452 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 1498304 ']' 00:14:39.452 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.452 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:39.452 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.452 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:39.452 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.452 [2024-07-24 19:15:25.348439] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
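
[Editor's sketch] Up to this point the trace has isolated the target port (cvl_0_0) in the cvl_0_0_ns_spdk namespace, addressed both ends of the link, opened TCP port 4420, verified reachability with ping in both directions, and launched nvmf_tgt inside the namespace. The following condensed sketch is assembled from the commands visible in the trace; the long Jenkins workspace paths are abbreviated to ./build and ./scripts, and the final readiness loop is an illustrative stand-in for the script's waitforlisten helper, not the helper itself.

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC moves into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # poll the RPC socket until the target answers (stand-in for waitforlisten)
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
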
00:14:39.452 [2024-07-24 19:15:25.348483] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:39.452 EAL: No free 2048 kB hugepages reported on node 1 00:14:39.452 [2024-07-24 19:15:25.422478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:39.452 [2024-07-24 19:15:25.497938] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:39.452 [2024-07-24 19:15:25.497978] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:39.452 [2024-07-24 19:15:25.497987] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:39.452 [2024-07-24 19:15:25.497996] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:39.452 [2024-07-24 19:15:25.498004] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:39.452 [2024-07-24 19:15:25.498093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.452 [2024-07-24 19:15:25.498188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:39.452 [2024-07-24 19:15:25.498270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:39.452 [2024-07-24 19:15:25.498272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.019 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:40.019 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:14:40.019 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:40.019 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:40.019 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:40.019 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:40.019 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:40.019 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.019 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:40.019 [2024-07-24 19:15:26.214097] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:40.019 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.019 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:40.019 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.019 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:40.019 Malloc0 00:14:40.019 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.020 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:40.020 19:15:26 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.020 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:40.279 Malloc1 00:14:40.279 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.279 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:40.279 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.279 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:40.279 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.279 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:40.279 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.279 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:40.279 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.279 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:40.279 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.279 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:40.279 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.279 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:40.279 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.279 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:40.279 [2024-07-24 19:15:26.294587] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:40.279 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.279 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:40.279 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.279 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:40.279 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.279 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 4420 00:14:40.279 00:14:40.279 Discovery Log Number of Records 2, Generation counter 2 00:14:40.279 =====Discovery Log Entry 0====== 00:14:40.279 trtype: tcp 00:14:40.279 adrfam: ipv4 00:14:40.279 subtype: current discovery subsystem 00:14:40.279 treq: not required 
00:14:40.279 portid: 0 00:14:40.279 trsvcid: 4420 00:14:40.279 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:40.279 traddr: 10.0.0.2 00:14:40.279 eflags: explicit discovery connections, duplicate discovery information 00:14:40.279 sectype: none 00:14:40.279 =====Discovery Log Entry 1====== 00:14:40.279 trtype: tcp 00:14:40.279 adrfam: ipv4 00:14:40.279 subtype: nvme subsystem 00:14:40.279 treq: not required 00:14:40.279 portid: 0 00:14:40.279 trsvcid: 4420 00:14:40.279 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:40.279 traddr: 10.0.0.2 00:14:40.279 eflags: none 00:14:40.279 sectype: none 00:14:40.279 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:40.279 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:40.279 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:40.279 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:40.279 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:40.279 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:40.280 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:40.280 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:40.280 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:40.280 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:40.280 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:41.657 19:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:41.657 19:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:14:41.657 19:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:41.657 19:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:41.657 19:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:41.657 19:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:14:43.563 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:43.563 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:43.563 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:43.563 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:43.563 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:43.563 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:14:43.563 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:43.563 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:43.563 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:43.563 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:43.822 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:43.822 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:43.822 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:43.822 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:43.822 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:43.822 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:43.822 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:43.822 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:43.822 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:43.822 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:43.822 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:43.822 /dev/nvme0n1 ]] 00:14:43.822 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:43.822 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:43.822 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:43.822 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:43.823 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:43.823 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:43.823 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:43.823 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:43.823 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:43.823 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:43.823 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:43.823 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:43.823 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:43.823 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:43.823 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:43.823 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:43.823 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:43.823 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:14:43.823 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:43.823 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:14:43.823 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:43.823 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:43.823 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:43.823 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:43.823 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:14:43.823 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:43.823 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:43.823 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.823 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.823 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.823 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:43.823 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:43.823 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:43.823 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:14:43.823 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:43.823 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:14:43.823 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:43.823 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:43.823 rmmod nvme_tcp 00:14:43.823 rmmod nvme_fabrics 00:14:43.823 rmmod nvme_keyring 00:14:43.823 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:43.823 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:14:43.823 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:14:43.823 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1498304 ']' 00:14:43.823 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1498304 00:14:43.823 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 1498304 ']' 00:14:43.823 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 1498304 00:14:43.823 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:14:43.823 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:43.823 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1498304 00:14:44.082 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:44.082 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:44.082 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1498304' 00:14:44.082 killing process with pid 1498304 00:14:44.082 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 1498304 00:14:44.082 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 1498304 00:14:44.341 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:44.341 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:44.341 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:44.341 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:44.341 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:44.341 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:44.341 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:44.341 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.248 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:46.248 00:14:46.248 real 0m13.779s 00:14:46.248 user 0m20.781s 00:14:46.248 sys 0m5.876s 00:14:46.248 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:46.248 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:46.248 ************************************ 00:14:46.248 END TEST nvmf_nvme_cli 00:14:46.248 ************************************ 00:14:46.248 19:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:46.248 19:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:46.248 19:15:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:46.248 19:15:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:46.248 19:15:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:46.507 ************************************ 00:14:46.507 START TEST nvmf_vfio_user 00:14:46.507 ************************************ 00:14:46.507 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:46.507 * Looking for test storage... 
00:14:46.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:46.507 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:46.507 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:46.507 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:46.507 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:46.507 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:46.507 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:46.507 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:46.507 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:46.507 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:46.507 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:46.507 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:46.507 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:46.507 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:46.507 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:46.507 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:46.507 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:46.507 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:46.507 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:46.507 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:46.507 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:46.507 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:46.507 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:46.507 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
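
[Editor's sketch] The nvmf_nvme_cli test that finished above (real 0m13.779s) drove the target entirely through rpc.py and nvme-cli. Its essential sequence, reconstructed from the commands in the trace, is condensed below; $NVME_HOSTNQN and $NVME_HOSTID come from nvme gen-hostnqn as in common.sh, paths are abbreviated, and the retry/timeout handling of waitforserial is omitted.

    # target side: TCP transport, two 64 MB malloc bdevs, one subsystem with both namespaces
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # host side: discover, connect, confirm both namespaces by serial, tear down
    nvme discover --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -a 10.0.0.2 -s 4420
    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 2
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
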
00:14:46.507 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.507 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.507 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:46.508 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.508 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:14:46.508 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:46.508 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:46.508 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:46.508 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:46.508 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:46.508 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:46.508 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:46.508 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:46.508 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:46.508 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:46.508 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:46.508 19:15:32 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:46.508 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:46.508 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:46.508 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:46.508 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:46.508 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:46.508 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:46.508 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1499719 00:14:46.508 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1499719' 00:14:46.508 Process pid: 1499719 00:14:46.508 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:46.508 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1499719 00:14:46.508 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:46.508 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1499719 ']' 00:14:46.508 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.508 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:46.508 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.508 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:46.508 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:46.508 [2024-07-24 19:15:32.690894] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:14:46.508 [2024-07-24 19:15:32.690949] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.508 EAL: No free 2048 kB hugepages reported on node 1 00:14:46.766 [2024-07-24 19:15:32.764951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:46.766 [2024-07-24 19:15:32.834781] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:46.766 [2024-07-24 19:15:32.834824] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
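
[Editor's sketch] The vfio-user variant begins the same way as the TCP test: a fresh nvmf_tgt, here pinned to an explicit core list and started after clearing the socket directory. A minimal sketch of that launch, using the arguments shown in the trace (workspace path abbreviated; the readiness wait is again a stand-in for waitforlisten):

    rm -rf /var/run/vfio-user
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!
    echo "Process pid: $nvmfpid"
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
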
00:14:46.767 [2024-07-24 19:15:32.834834] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:46.767 [2024-07-24 19:15:32.834842] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:46.767 [2024-07-24 19:15:32.834849] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:46.767 [2024-07-24 19:15:32.834948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.767 [2024-07-24 19:15:32.835042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:46.767 [2024-07-24 19:15:32.835130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:46.767 [2024-07-24 19:15:32.835132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.356 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:47.356 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:14:47.356 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:48.295 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:48.554 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:48.554 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:48.554 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:48.554 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:48.554 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:48.813 Malloc1 00:14:48.813 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:49.072 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:49.072 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:49.331 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:49.331 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:49.331 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:49.590 Malloc2 00:14:49.590 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
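The RPC sequence that starts above (its second iteration completes just below) provisions one malloc-backed vfio-user controller per device. Condensed into a sketch using the commands taken from the log itself, with the sizes set earlier by MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512:

    rpc_py=$SPDK_ROOT/scripts/rpc.py
    $rpc_py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        $rpc_py bdev_malloc_create 64 512 -b Malloc$i   # 64 MB bdev, 512-byte blocks
        $rpc_py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $rpc_py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $rpc_py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
            -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done

Each listener directory then holds the cntrl socket that spdk_nvme_identify, spdk_nvme_perf, and the other tools below attach to via trtype:VFIOUSER.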
00:14:49.590 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:49.850 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:50.110 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:50.110 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:50.111 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:50.111 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:50.111 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:50.111 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:50.111 [2024-07-24 19:15:36.173025] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:14:50.111 [2024-07-24 19:15:36.173064] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1500307 ] 00:14:50.111 EAL: No free 2048 kB hugepages reported on node 1 00:14:50.111 [2024-07-24 19:15:36.204048] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:50.111 [2024-07-24 19:15:36.214052] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:50.111 [2024-07-24 19:15:36.214071] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3d18da5000 00:14:50.111 [2024-07-24 19:15:36.215051] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:50.111 [2024-07-24 19:15:36.216048] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:50.111 [2024-07-24 19:15:36.217054] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:50.111 [2024-07-24 19:15:36.218058] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:50.111 [2024-07-24 19:15:36.219065] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:50.111 [2024-07-24 19:15:36.220073] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:50.111 [2024-07-24 19:15:36.221078] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:50.111 [2024-07-24 19:15:36.222082] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:50.111 [2024-07-24 19:15:36.223088] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:50.111 [2024-07-24 19:15:36.223098] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f3d18d9a000 00:14:50.111 [2024-07-24 19:15:36.223990] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:50.111 [2024-07-24 19:15:36.236288] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:50.111 [2024-07-24 19:15:36.236316] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:50.111 [2024-07-24 19:15:36.239190] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:50.111 [2024-07-24 19:15:36.239228] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:50.111 [2024-07-24 19:15:36.239300] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:50.111 [2024-07-24 19:15:36.239317] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:50.111 [2024-07-24 19:15:36.239324] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:50.111 [2024-07-24 19:15:36.240186] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:50.111 [2024-07-24 19:15:36.240199] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:50.111 [2024-07-24 19:15:36.240208] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:50.111 [2024-07-24 19:15:36.241189] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:50.111 [2024-07-24 19:15:36.241199] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:50.111 [2024-07-24 19:15:36.241208] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:50.111 [2024-07-24 19:15:36.242198] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:50.111 [2024-07-24 19:15:36.242208] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:50.111 [2024-07-24 19:15:36.243197] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:50.111 [2024-07-24 19:15:36.243206] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:50.111 [2024-07-24 19:15:36.243212] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:50.111 [2024-07-24 19:15:36.243220] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:50.111 [2024-07-24 19:15:36.243327] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:50.111 [2024-07-24 19:15:36.243336] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:50.111 [2024-07-24 19:15:36.243343] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:50.111 [2024-07-24 19:15:36.244202] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:50.111 [2024-07-24 19:15:36.245210] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:50.111 [2024-07-24 19:15:36.246215] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:50.111 [2024-07-24 19:15:36.247215] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:50.111 [2024-07-24 19:15:36.250726] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:50.111 [2024-07-24 19:15:36.251248] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:50.111 [2024-07-24 19:15:36.251257] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:50.111 [2024-07-24 19:15:36.251263] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:50.111 [2024-07-24 19:15:36.251282] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:50.111 [2024-07-24 19:15:36.251296] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:50.111 [2024-07-24 19:15:36.251314] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:50.111 [2024-07-24 19:15:36.251321] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:50.111 [2024-07-24 19:15:36.251325] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:50.111 [2024-07-24 19:15:36.251340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 
cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:50.111 [2024-07-24 19:15:36.251389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:50.111 [2024-07-24 19:15:36.251400] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:50.111 [2024-07-24 19:15:36.251406] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:50.111 [2024-07-24 19:15:36.251412] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:50.111 [2024-07-24 19:15:36.251418] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:50.111 [2024-07-24 19:15:36.251424] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:50.111 [2024-07-24 19:15:36.251431] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:50.111 [2024-07-24 19:15:36.251437] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:50.111 [2024-07-24 19:15:36.251446] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:50.111 [2024-07-24 19:15:36.251459] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:50.111 [2024-07-24 19:15:36.251472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:50.111 [2024-07-24 19:15:36.251486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:50.111 [2024-07-24 19:15:36.251496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:50.111 [2024-07-24 19:15:36.251506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:50.111 [2024-07-24 19:15:36.251515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:50.112 [2024-07-24 19:15:36.251521] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:50.112 [2024-07-24 19:15:36.251532] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:50.112 [2024-07-24 19:15:36.251542] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:50.112 [2024-07-24 19:15:36.251557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:50.112 [2024-07-24 19:15:36.251564] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:50.112 
[2024-07-24 19:15:36.251571] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:50.112 [2024-07-24 19:15:36.251582] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:50.112 [2024-07-24 19:15:36.251590] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:50.112 [2024-07-24 19:15:36.251600] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:50.112 [2024-07-24 19:15:36.251611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:50.112 [2024-07-24 19:15:36.251661] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:50.112 [2024-07-24 19:15:36.251671] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:50.112 [2024-07-24 19:15:36.251679] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:50.112 [2024-07-24 19:15:36.251685] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:50.112 [2024-07-24 19:15:36.251690] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:50.112 [2024-07-24 19:15:36.251697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:50.112 [2024-07-24 19:15:36.251710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:50.112 [2024-07-24 19:15:36.251724] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:50.112 [2024-07-24 19:15:36.251735] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:50.112 [2024-07-24 19:15:36.251744] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:50.112 [2024-07-24 19:15:36.251753] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:50.112 [2024-07-24 19:15:36.251759] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:50.112 [2024-07-24 19:15:36.251764] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:50.112 [2024-07-24 19:15:36.251771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:50.112 [2024-07-24 19:15:36.251790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:50.112 [2024-07-24 19:15:36.251804] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 
30000 ms) 00:14:50.112 [2024-07-24 19:15:36.251813] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:50.112 [2024-07-24 19:15:36.251821] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:50.112 [2024-07-24 19:15:36.251827] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:50.112 [2024-07-24 19:15:36.251832] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:50.112 [2024-07-24 19:15:36.251839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:50.112 [2024-07-24 19:15:36.251849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:50.112 [2024-07-24 19:15:36.251859] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:50.112 [2024-07-24 19:15:36.251867] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:50.112 [2024-07-24 19:15:36.251876] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:50.112 [2024-07-24 19:15:36.251885] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:50.112 [2024-07-24 19:15:36.251892] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:50.112 [2024-07-24 19:15:36.251898] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:50.112 [2024-07-24 19:15:36.251904] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:50.112 [2024-07-24 19:15:36.251911] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:50.112 [2024-07-24 19:15:36.251917] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:50.112 [2024-07-24 19:15:36.251935] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:50.112 [2024-07-24 19:15:36.251946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:50.112 [2024-07-24 19:15:36.251960] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:50.112 [2024-07-24 19:15:36.251972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:50.112 [2024-07-24 19:15:36.251986] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:50.112 [2024-07-24 
19:15:36.251995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:50.112 [2024-07-24 19:15:36.252008] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:50.112 [2024-07-24 19:15:36.252016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:50.112 [2024-07-24 19:15:36.252031] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:50.112 [2024-07-24 19:15:36.252038] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:50.112 [2024-07-24 19:15:36.252042] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:50.112 [2024-07-24 19:15:36.252047] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:50.112 [2024-07-24 19:15:36.252051] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:50.112 [2024-07-24 19:15:36.252059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:50.112 [2024-07-24 19:15:36.252067] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:50.112 [2024-07-24 19:15:36.252073] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:50.112 [2024-07-24 19:15:36.252078] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:50.112 [2024-07-24 19:15:36.252084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:50.112 [2024-07-24 19:15:36.252092] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:50.112 [2024-07-24 19:15:36.252098] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:50.112 [2024-07-24 19:15:36.252103] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:50.112 [2024-07-24 19:15:36.252110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:50.112 [2024-07-24 19:15:36.252118] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:50.112 [2024-07-24 19:15:36.252124] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:50.112 [2024-07-24 19:15:36.252128] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:50.112 [2024-07-24 19:15:36.252135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:50.112 [2024-07-24 19:15:36.252143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:50.112 [2024-07-24 19:15:36.252159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:50.112 [2024-07-24 
19:15:36.252172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:50.112 [2024-07-24 19:15:36.252181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:50.112 ===================================================== 00:14:50.112 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:50.112 ===================================================== 00:14:50.112 Controller Capabilities/Features 00:14:50.112 ================================ 00:14:50.112 Vendor ID: 4e58 00:14:50.112 Subsystem Vendor ID: 4e58 00:14:50.113 Serial Number: SPDK1 00:14:50.113 Model Number: SPDK bdev Controller 00:14:50.113 Firmware Version: 24.09 00:14:50.113 Recommended Arb Burst: 6 00:14:50.113 IEEE OUI Identifier: 8d 6b 50 00:14:50.113 Multi-path I/O 00:14:50.113 May have multiple subsystem ports: Yes 00:14:50.113 May have multiple controllers: Yes 00:14:50.113 Associated with SR-IOV VF: No 00:14:50.113 Max Data Transfer Size: 131072 00:14:50.113 Max Number of Namespaces: 32 00:14:50.113 Max Number of I/O Queues: 127 00:14:50.113 NVMe Specification Version (VS): 1.3 00:14:50.113 NVMe Specification Version (Identify): 1.3 00:14:50.113 Maximum Queue Entries: 256 00:14:50.113 Contiguous Queues Required: Yes 00:14:50.113 Arbitration Mechanisms Supported 00:14:50.113 Weighted Round Robin: Not Supported 00:14:50.113 Vendor Specific: Not Supported 00:14:50.113 Reset Timeout: 15000 ms 00:14:50.113 Doorbell Stride: 4 bytes 00:14:50.113 NVM Subsystem Reset: Not Supported 00:14:50.113 Command Sets Supported 00:14:50.113 NVM Command Set: Supported 00:14:50.113 Boot Partition: Not Supported 00:14:50.113 Memory Page Size Minimum: 4096 bytes 00:14:50.113 Memory Page Size Maximum: 4096 bytes 00:14:50.113 Persistent Memory Region: Not Supported 00:14:50.113 Optional Asynchronous Events Supported 00:14:50.113 Namespace Attribute Notices: Supported 00:14:50.113 Firmware Activation Notices: Not Supported 00:14:50.113 ANA Change Notices: Not Supported 00:14:50.113 PLE Aggregate Log Change Notices: Not Supported 00:14:50.113 LBA Status Info Alert Notices: Not Supported 00:14:50.113 EGE Aggregate Log Change Notices: Not Supported 00:14:50.113 Normal NVM Subsystem Shutdown event: Not Supported 00:14:50.113 Zone Descriptor Change Notices: Not Supported 00:14:50.113 Discovery Log Change Notices: Not Supported 00:14:50.113 Controller Attributes 00:14:50.113 128-bit Host Identifier: Supported 00:14:50.113 Non-Operational Permissive Mode: Not Supported 00:14:50.113 NVM Sets: Not Supported 00:14:50.113 Read Recovery Levels: Not Supported 00:14:50.113 Endurance Groups: Not Supported 00:14:50.113 Predictable Latency Mode: Not Supported 00:14:50.113 Traffic Based Keep ALive: Not Supported 00:14:50.113 Namespace Granularity: Not Supported 00:14:50.113 SQ Associations: Not Supported 00:14:50.113 UUID List: Not Supported 00:14:50.113 Multi-Domain Subsystem: Not Supported 00:14:50.113 Fixed Capacity Management: Not Supported 00:14:50.113 Variable Capacity Management: Not Supported 00:14:50.113 Delete Endurance Group: Not Supported 00:14:50.113 Delete NVM Set: Not Supported 00:14:50.113 Extended LBA Formats Supported: Not Supported 00:14:50.113 Flexible Data Placement Supported: Not Supported 00:14:50.113 00:14:50.113 Controller Memory Buffer Support 00:14:50.113 ================================ 00:14:50.113 Supported: No 00:14:50.113 00:14:50.113 Persistent 
Memory Region Support 00:14:50.113 ================================ 00:14:50.113 Supported: No 00:14:50.113 00:14:50.113 Admin Command Set Attributes 00:14:50.113 ============================ 00:14:50.113 Security Send/Receive: Not Supported 00:14:50.113 Format NVM: Not Supported 00:14:50.113 Firmware Activate/Download: Not Supported 00:14:50.113 Namespace Management: Not Supported 00:14:50.113 Device Self-Test: Not Supported 00:14:50.113 Directives: Not Supported 00:14:50.113 NVMe-MI: Not Supported 00:14:50.113 Virtualization Management: Not Supported 00:14:50.113 Doorbell Buffer Config: Not Supported 00:14:50.113 Get LBA Status Capability: Not Supported 00:14:50.113 Command & Feature Lockdown Capability: Not Supported 00:14:50.113 Abort Command Limit: 4 00:14:50.113 Async Event Request Limit: 4 00:14:50.113 Number of Firmware Slots: N/A 00:14:50.113 Firmware Slot 1 Read-Only: N/A 00:14:50.113 Firmware Activation Without Reset: N/A 00:14:50.113 Multiple Update Detection Support: N/A 00:14:50.113 Firmware Update Granularity: No Information Provided 00:14:50.113 Per-Namespace SMART Log: No 00:14:50.113 Asymmetric Namespace Access Log Page: Not Supported 00:14:50.113 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:50.113 Command Effects Log Page: Supported 00:14:50.113 Get Log Page Extended Data: Supported 00:14:50.113 Telemetry Log Pages: Not Supported 00:14:50.113 Persistent Event Log Pages: Not Supported 00:14:50.113 Supported Log Pages Log Page: May Support 00:14:50.113 Commands Supported & Effects Log Page: Not Supported 00:14:50.113 Feature Identifiers & Effects Log Page:May Support 00:14:50.113 NVMe-MI Commands & Effects Log Page: May Support 00:14:50.113 Data Area 4 for Telemetry Log: Not Supported 00:14:50.113 Error Log Page Entries Supported: 128 00:14:50.113 Keep Alive: Supported 00:14:50.113 Keep Alive Granularity: 10000 ms 00:14:50.113 00:14:50.113 NVM Command Set Attributes 00:14:50.113 ========================== 00:14:50.113 Submission Queue Entry Size 00:14:50.113 Max: 64 00:14:50.113 Min: 64 00:14:50.113 Completion Queue Entry Size 00:14:50.113 Max: 16 00:14:50.113 Min: 16 00:14:50.113 Number of Namespaces: 32 00:14:50.113 Compare Command: Supported 00:14:50.113 Write Uncorrectable Command: Not Supported 00:14:50.113 Dataset Management Command: Supported 00:14:50.113 Write Zeroes Command: Supported 00:14:50.113 Set Features Save Field: Not Supported 00:14:50.113 Reservations: Not Supported 00:14:50.113 Timestamp: Not Supported 00:14:50.113 Copy: Supported 00:14:50.113 Volatile Write Cache: Present 00:14:50.113 Atomic Write Unit (Normal): 1 00:14:50.113 Atomic Write Unit (PFail): 1 00:14:50.113 Atomic Compare & Write Unit: 1 00:14:50.113 Fused Compare & Write: Supported 00:14:50.113 Scatter-Gather List 00:14:50.113 SGL Command Set: Supported (Dword aligned) 00:14:50.113 SGL Keyed: Not Supported 00:14:50.113 SGL Bit Bucket Descriptor: Not Supported 00:14:50.113 SGL Metadata Pointer: Not Supported 00:14:50.113 Oversized SGL: Not Supported 00:14:50.113 SGL Metadata Address: Not Supported 00:14:50.113 SGL Offset: Not Supported 00:14:50.113 Transport SGL Data Block: Not Supported 00:14:50.113 Replay Protected Memory Block: Not Supported 00:14:50.113 00:14:50.113 Firmware Slot Information 00:14:50.113 ========================= 00:14:50.113 Active slot: 1 00:14:50.113 Slot 1 Firmware Revision: 24.09 00:14:50.113 00:14:50.113 00:14:50.113 Commands Supported and Effects 00:14:50.113 ============================== 00:14:50.113 Admin Commands 00:14:50.113 -------------- 00:14:50.113 Get 
Log Page (02h): Supported 00:14:50.113 Identify (06h): Supported 00:14:50.113 Abort (08h): Supported 00:14:50.113 Set Features (09h): Supported 00:14:50.113 Get Features (0Ah): Supported 00:14:50.113 Asynchronous Event Request (0Ch): Supported 00:14:50.113 Keep Alive (18h): Supported 00:14:50.113 I/O Commands 00:14:50.113 ------------ 00:14:50.113 Flush (00h): Supported LBA-Change 00:14:50.113 Write (01h): Supported LBA-Change 00:14:50.113 Read (02h): Supported 00:14:50.113 Compare (05h): Supported 00:14:50.113 Write Zeroes (08h): Supported LBA-Change 00:14:50.113 Dataset Management (09h): Supported LBA-Change 00:14:50.113 Copy (19h): Supported LBA-Change 00:14:50.113 00:14:50.113 Error Log 00:14:50.113 ========= 00:14:50.113 00:14:50.113 Arbitration 00:14:50.113 =========== 00:14:50.113 Arbitration Burst: 1 00:14:50.113 00:14:50.113 Power Management 00:14:50.113 ================ 00:14:50.113 Number of Power States: 1 00:14:50.113 Current Power State: Power State #0 00:14:50.113 Power State #0: 00:14:50.113 Max Power: 0.00 W 00:14:50.113 Non-Operational State: Operational 00:14:50.113 Entry Latency: Not Reported 00:14:50.113 Exit Latency: Not Reported 00:14:50.113 Relative Read Throughput: 0 00:14:50.113 Relative Read Latency: 0 00:14:50.113 Relative Write Throughput: 0 00:14:50.113 Relative Write Latency: 0 00:14:50.113 Idle Power: Not Reported 00:14:50.113 Active Power: Not Reported 00:14:50.113 Non-Operational Permissive Mode: Not Supported 00:14:50.113 00:14:50.113 Health Information 00:14:50.114 ================== 00:14:50.114 Critical Warnings: 00:14:50.114 Available Spare Space: OK 00:14:50.114 Temperature: OK 00:14:50.114 Device Reliability: OK 00:14:50.114 Read Only: No 00:14:50.114 Volatile Memory Backup: OK 00:14:50.114 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:50.114 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:50.114 Available Spare: 0% 00:14:50.114 Available Sp[2024-07-24 19:15:36.252267] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:50.114 [2024-07-24 19:15:36.252276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:50.114 [2024-07-24 19:15:36.252304] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:50.114 [2024-07-24 19:15:36.252317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.114 [2024-07-24 19:15:36.252325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.114 [2024-07-24 19:15:36.252333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.114 [2024-07-24 19:15:36.252341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.114 [2024-07-24 19:15:36.253262] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:50.114 [2024-07-24 19:15:36.253274] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:50.114 [2024-07-24 19:15:36.254261] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:50.114 [2024-07-24 19:15:36.254313] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:50.114 [2024-07-24 19:15:36.254320] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:50.114 [2024-07-24 19:15:36.255268] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:50.114 [2024-07-24 19:15:36.255281] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:50.114 [2024-07-24 19:15:36.255330] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:50.114 [2024-07-24 19:15:36.256295] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:50.114 are Threshold: 0% 00:14:50.114 Life Percentage Used: 0% 00:14:50.114 Data Units Read: 0 00:14:50.114 Data Units Written: 0 00:14:50.114 Host Read Commands: 0 00:14:50.114 Host Write Commands: 0 00:14:50.114 Controller Busy Time: 0 minutes 00:14:50.114 Power Cycles: 0 00:14:50.114 Power On Hours: 0 hours 00:14:50.114 Unsafe Shutdowns: 0 00:14:50.114 Unrecoverable Media Errors: 0 00:14:50.114 Lifetime Error Log Entries: 0 00:14:50.114 Warning Temperature Time: 0 minutes 00:14:50.114 Critical Temperature Time: 0 minutes 00:14:50.114 00:14:50.114 Number of Queues 00:14:50.114 ================ 00:14:50.114 Number of I/O Submission Queues: 127 00:14:50.114 Number of I/O Completion Queues: 127 00:14:50.114 00:14:50.114 Active Namespaces 00:14:50.114 ================= 00:14:50.114 Namespace ID:1 00:14:50.114 Error Recovery Timeout: Unlimited 00:14:50.114 Command Set Identifier: NVM (00h) 00:14:50.114 Deallocate: Supported 00:14:50.114 Deallocated/Unwritten Error: Not Supported 00:14:50.114 Deallocated Read Value: Unknown 00:14:50.114 Deallocate in Write Zeroes: Not Supported 00:14:50.114 Deallocated Guard Field: 0xFFFF 00:14:50.114 Flush: Supported 00:14:50.114 Reservation: Supported 00:14:50.114 Namespace Sharing Capabilities: Multiple Controllers 00:14:50.114 Size (in LBAs): 131072 (0GiB) 00:14:50.114 Capacity (in LBAs): 131072 (0GiB) 00:14:50.114 Utilization (in LBAs): 131072 (0GiB) 00:14:50.114 NGUID: 2CB7DF5E1AC744FB8EC54E30FE3955F6 00:14:50.114 UUID: 2cb7df5e-1ac7-44fb-8ec5-4e30fe3955f6 00:14:50.114 Thin Provisioning: Not Supported 00:14:50.114 Per-NS Atomic Units: Yes 00:14:50.114 Atomic Boundary Size (Normal): 0 00:14:50.114 Atomic Boundary Size (PFail): 0 00:14:50.114 Atomic Boundary Offset: 0 00:14:50.114 Maximum Single Source Range Length: 65535 00:14:50.114 Maximum Copy Length: 65535 00:14:50.114 Maximum Source Range Count: 1 00:14:50.114 NGUID/EUI64 Never Reused: No 00:14:50.114 Namespace Write Protected: No 00:14:50.114 Number of LBA Formats: 1 00:14:50.114 Current LBA Format: LBA Format #00 00:14:50.114 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:50.114 00:14:50.114 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:50.114 EAL: No free 2048 kB hugepages reported 
on node 1 00:14:50.374 [2024-07-24 19:15:36.477687] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:55.650 Initializing NVMe Controllers 00:14:55.650 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:55.650 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:55.650 Initialization complete. Launching workers. 00:14:55.650 ======================================================== 00:14:55.650 Latency(us) 00:14:55.650 Device Information : IOPS MiB/s Average min max 00:14:55.650 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39926.50 155.96 3205.72 909.13 7671.87 00:14:55.650 ======================================================== 00:14:55.650 Total : 39926.50 155.96 3205.72 909.13 7671.87 00:14:55.650 00:14:55.650 [2024-07-24 19:15:41.496006] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:55.650 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:55.650 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.650 [2024-07-24 19:15:41.722066] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:00.991 Initializing NVMe Controllers 00:15:00.991 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:00.991 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:00.991 Initialization complete. Launching workers. 
00:15:00.991 ======================================================== 00:15:00.991 Latency(us) 00:15:00.991 Device Information : IOPS MiB/s Average min max 00:15:00.991 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16038.99 62.65 7979.89 7601.80 8136.79 00:15:00.991 ======================================================== 00:15:00.991 Total : 16038.99 62.65 7979.89 7601.80 8136.79 00:15:00.991 00:15:00.991 [2024-07-24 19:15:46.756320] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:00.991 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:00.991 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.991 [2024-07-24 19:15:46.970289] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:06.267 [2024-07-24 19:15:52.045045] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:06.267 Initializing NVMe Controllers 00:15:06.267 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:06.267 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:06.267 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:06.267 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:06.267 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:06.267 Initialization complete. Launching workers. 00:15:06.267 Starting thread on core 2 00:15:06.267 Starting thread on core 3 00:15:06.267 Starting thread on core 1 00:15:06.267 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:06.267 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.267 [2024-07-24 19:15:52.352172] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:09.558 [2024-07-24 19:15:55.422669] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:09.558 Initializing NVMe Controllers 00:15:09.558 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:09.558 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:09.558 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:09.558 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:09.558 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:09.558 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:09.558 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:09.558 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:09.558 Initialization complete. Launching workers. 
00:15:09.558 Starting thread on core 1 with urgent priority queue 00:15:09.558 Starting thread on core 2 with urgent priority queue 00:15:09.558 Starting thread on core 3 with urgent priority queue 00:15:09.558 Starting thread on core 0 with urgent priority queue 00:15:09.558 SPDK bdev Controller (SPDK1 ) core 0: 7067.67 IO/s 14.15 secs/100000 ios 00:15:09.558 SPDK bdev Controller (SPDK1 ) core 1: 8773.67 IO/s 11.40 secs/100000 ios 00:15:09.558 SPDK bdev Controller (SPDK1 ) core 2: 7139.67 IO/s 14.01 secs/100000 ios 00:15:09.558 SPDK bdev Controller (SPDK1 ) core 3: 10722.00 IO/s 9.33 secs/100000 ios 00:15:09.558 ======================================================== 00:15:09.558 00:15:09.558 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:09.558 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.558 [2024-07-24 19:15:55.720177] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:09.558 Initializing NVMe Controllers 00:15:09.558 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:09.558 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:09.558 Namespace ID: 1 size: 0GB 00:15:09.558 Initialization complete. 00:15:09.558 INFO: using host memory buffer for IO 00:15:09.558 Hello world! 00:15:09.558 [2024-07-24 19:15:55.754549] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:09.558 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:09.817 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.817 [2024-07-24 19:15:56.035013] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:11.195 Initializing NVMe Controllers 00:15:11.195 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:11.195 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:11.195 Initialization complete. Launching workers. 
00:15:11.195 submit (in ns) avg, min, max = 6345.5, 3034.4, 3999059.2 00:15:11.195 complete (in ns) avg, min, max = 19621.9, 1660.0, 3997957.6 00:15:11.195 00:15:11.195 Submit histogram 00:15:11.195 ================ 00:15:11.195 Range in us Cumulative Count 00:15:11.195 3.034 - 3.046: 0.0118% ( 2) 00:15:11.195 3.046 - 3.059: 0.0235% ( 2) 00:15:11.195 3.059 - 3.072: 0.0353% ( 2) 00:15:11.195 3.072 - 3.085: 0.0471% ( 2) 00:15:11.195 3.085 - 3.098: 0.2059% ( 27) 00:15:11.195 3.098 - 3.110: 0.9530% ( 127) 00:15:11.195 3.110 - 3.123: 2.3354% ( 235) 00:15:11.195 3.123 - 3.136: 4.9415% ( 443) 00:15:11.195 3.136 - 3.149: 8.4829% ( 602) 00:15:11.195 3.149 - 3.162: 12.7302% ( 722) 00:15:11.195 3.162 - 3.174: 18.0658% ( 907) 00:15:11.195 3.174 - 3.187: 23.8308% ( 980) 00:15:11.196 3.187 - 3.200: 29.7017% ( 998) 00:15:11.196 3.200 - 3.213: 35.9021% ( 1054) 00:15:11.196 3.213 - 3.226: 42.9025% ( 1190) 00:15:11.196 3.226 - 3.238: 49.4500% ( 1113) 00:15:11.196 3.238 - 3.251: 53.7620% ( 733) 00:15:11.196 3.251 - 3.264: 57.0622% ( 561) 00:15:11.196 3.264 - 3.277: 60.1977% ( 533) 00:15:11.196 3.277 - 3.302: 66.1157% ( 1006) 00:15:11.196 3.302 - 3.328: 70.2688% ( 706) 00:15:11.196 3.328 - 3.354: 77.5516% ( 1238) 00:15:11.196 3.354 - 3.379: 83.7285% ( 1050) 00:15:11.196 3.379 - 3.405: 85.9580% ( 379) 00:15:11.196 3.405 - 3.430: 87.7169% ( 299) 00:15:11.196 3.430 - 3.456: 88.7817% ( 181) 00:15:11.196 3.456 - 3.482: 90.1465% ( 232) 00:15:11.196 3.482 - 3.507: 91.7819% ( 278) 00:15:11.196 3.507 - 3.533: 93.6232% ( 313) 00:15:11.196 3.533 - 3.558: 94.9232% ( 221) 00:15:11.196 3.558 - 3.584: 95.9998% ( 183) 00:15:11.196 3.584 - 3.610: 97.1469% ( 195) 00:15:11.196 3.610 - 3.635: 98.1293% ( 167) 00:15:11.196 3.635 - 3.661: 98.6764% ( 93) 00:15:11.196 3.661 - 3.686: 99.1764% ( 85) 00:15:11.196 3.686 - 3.712: 99.3588% ( 31) 00:15:11.196 3.712 - 3.738: 99.5000% ( 24) 00:15:11.196 3.738 - 3.763: 99.5588% ( 10) 00:15:11.196 3.763 - 3.789: 99.5882% ( 5) 00:15:11.196 3.789 - 3.814: 99.6176% ( 5) 00:15:11.196 5.632 - 5.658: 99.6235% ( 1) 00:15:11.196 5.734 - 5.760: 99.6294% ( 1) 00:15:11.196 5.862 - 5.888: 99.6353% ( 1) 00:15:11.196 5.888 - 5.914: 99.6412% ( 1) 00:15:11.196 6.016 - 6.042: 99.6470% ( 1) 00:15:11.196 6.118 - 6.144: 99.6529% ( 1) 00:15:11.196 6.144 - 6.170: 99.6647% ( 2) 00:15:11.196 6.272 - 6.298: 99.6706% ( 1) 00:15:11.196 6.298 - 6.323: 99.6765% ( 1) 00:15:11.196 6.323 - 6.349: 99.6882% ( 2) 00:15:11.196 6.349 - 6.374: 99.6941% ( 1) 00:15:11.196 6.374 - 6.400: 99.7000% ( 1) 00:15:11.196 6.426 - 6.451: 99.7059% ( 1) 00:15:11.196 6.502 - 6.528: 99.7117% ( 1) 00:15:11.196 6.528 - 6.554: 99.7176% ( 1) 00:15:11.196 6.605 - 6.656: 99.7235% ( 1) 00:15:11.196 6.707 - 6.758: 99.7412% ( 3) 00:15:11.196 6.758 - 6.810: 99.7470% ( 1) 00:15:11.196 6.810 - 6.861: 99.7529% ( 1) 00:15:11.196 6.861 - 6.912: 99.7765% ( 4) 00:15:11.196 6.912 - 6.963: 99.7823% ( 1) 00:15:11.196 6.963 - 7.014: 99.7941% ( 2) 00:15:11.196 7.066 - 7.117: 99.8118% ( 3) 00:15:11.196 7.117 - 7.168: 99.8294% ( 3) 00:15:11.196 7.168 - 7.219: 99.8470% ( 3) 00:15:11.196 7.270 - 7.322: 99.8588% ( 2) 00:15:11.196 7.322 - 7.373: 99.8647% ( 1) 00:15:11.196 7.373 - 7.424: 99.8706% ( 1) 00:15:11.196 7.629 - 7.680: 99.8765% ( 1) 00:15:11.196 7.731 - 7.782: 99.8823% ( 1) 00:15:11.196 8.090 - 8.141: 99.8882% ( 1) 00:15:11.196 8.192 - 8.243: 99.8941% ( 1) 00:15:11.196 8.346 - 8.397: 99.9000% ( 1) 00:15:11.196 9.216 - 9.267: 99.9059% ( 1) 00:15:11.196 9.370 - 9.421: 99.9118% ( 1) 00:15:11.196 10.598 - 10.650: 99.9176% ( 1) 00:15:11.196 11.520 - 11.571: 99.9235% ( 1) 
00:15:11.196 3984.589 - 4010.803: 100.0000% ( 13) 00:15:11.196 00:15:11.196 Complete histogram 00:15:11.196 ================== 00:15:11.196 Ra[2024-07-24 19:15:57.057114] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:11.196 nge in us Cumulative Count 00:15:11.196 1.651 - 1.664: 0.0059% ( 1) 00:15:11.196 1.664 - 1.677: 0.0824% ( 13) 00:15:11.196 1.677 - 1.690: 0.1294% ( 8) 00:15:11.196 1.690 - 1.702: 0.1471% ( 3) 00:15:11.196 1.702 - 1.715: 1.4471% ( 221) 00:15:11.196 1.715 - 1.728: 8.7064% ( 1234) 00:15:11.196 1.728 - 1.741: 12.3184% ( 614) 00:15:11.196 1.741 - 1.754: 13.1361% ( 139) 00:15:11.196 1.754 - 1.766: 17.0539% ( 666) 00:15:11.196 1.766 - 1.779: 58.2564% ( 7004) 00:15:11.196 1.779 - 1.792: 89.1700% ( 5255) 00:15:11.196 1.792 - 1.805: 95.1174% ( 1011) 00:15:11.196 1.805 - 1.818: 97.4293% ( 393) 00:15:11.196 1.818 - 1.830: 97.9411% ( 87) 00:15:11.196 1.830 - 1.843: 98.2999% ( 61) 00:15:11.196 1.843 - 1.856: 98.7705% ( 80) 00:15:11.196 1.856 - 1.869: 99.1176% ( 59) 00:15:11.196 1.869 - 1.882: 99.2529% ( 23) 00:15:11.196 1.882 - 1.894: 99.3058% ( 9) 00:15:11.196 1.894 - 1.907: 99.3411% ( 6) 00:15:11.196 1.907 - 1.920: 99.3470% ( 1) 00:15:11.196 1.920 - 1.933: 99.3647% ( 3) 00:15:11.196 1.933 - 1.946: 99.3706% ( 1) 00:15:11.196 1.958 - 1.971: 99.3764% ( 1) 00:15:11.196 1.971 - 1.984: 99.3882% ( 2) 00:15:11.196 2.150 - 2.163: 99.3941% ( 1) 00:15:11.196 3.942 - 3.968: 99.4000% ( 1) 00:15:11.196 4.096 - 4.122: 99.4058% ( 1) 00:15:11.196 4.531 - 4.557: 99.4117% ( 1) 00:15:11.196 4.736 - 4.762: 99.4235% ( 2) 00:15:11.196 4.787 - 4.813: 99.4353% ( 2) 00:15:11.196 4.813 - 4.838: 99.4411% ( 1) 00:15:11.196 4.838 - 4.864: 99.4470% ( 1) 00:15:11.196 4.915 - 4.941: 99.4529% ( 1) 00:15:11.196 5.069 - 5.094: 99.4588% ( 1) 00:15:11.196 5.094 - 5.120: 99.4647% ( 1) 00:15:11.196 5.146 - 5.171: 99.4706% ( 1) 00:15:11.196 5.197 - 5.222: 99.4764% ( 1) 00:15:11.196 5.274 - 5.299: 99.4823% ( 1) 00:15:11.196 5.325 - 5.350: 99.4882% ( 1) 00:15:11.196 5.709 - 5.734: 99.5000% ( 2) 00:15:11.196 5.862 - 5.888: 99.5059% ( 1) 00:15:11.196 5.990 - 6.016: 99.5117% ( 1) 00:15:11.196 6.067 - 6.093: 99.5176% ( 1) 00:15:11.196 6.221 - 6.246: 99.5235% ( 1) 00:15:11.196 6.246 - 6.272: 99.5294% ( 1) 00:15:11.196 7.014 - 7.066: 99.5353% ( 1) 00:15:11.196 7.117 - 7.168: 99.5411% ( 1) 00:15:11.196 7.322 - 7.373: 99.5470% ( 1) 00:15:11.196 12.083 - 12.134: 99.5529% ( 1) 00:15:11.196 3670.016 - 3696.230: 99.5588% ( 1) 00:15:11.196 3984.589 - 4010.803: 100.0000% ( 75) 00:15:11.196 00:15:11.196 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:11.196 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:11.196 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:11.196 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:11.196 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:11.196 [ 00:15:11.196 { 00:15:11.196 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:11.196 "subtype": "Discovery", 00:15:11.196 "listen_addresses": [], 00:15:11.196 "allow_any_host": true, 00:15:11.196 
"hosts": [] 00:15:11.196 }, 00:15:11.196 { 00:15:11.196 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:11.196 "subtype": "NVMe", 00:15:11.196 "listen_addresses": [ 00:15:11.196 { 00:15:11.196 "trtype": "VFIOUSER", 00:15:11.196 "adrfam": "IPv4", 00:15:11.196 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:11.196 "trsvcid": "0" 00:15:11.196 } 00:15:11.196 ], 00:15:11.196 "allow_any_host": true, 00:15:11.196 "hosts": [], 00:15:11.196 "serial_number": "SPDK1", 00:15:11.196 "model_number": "SPDK bdev Controller", 00:15:11.196 "max_namespaces": 32, 00:15:11.196 "min_cntlid": 1, 00:15:11.196 "max_cntlid": 65519, 00:15:11.196 "namespaces": [ 00:15:11.196 { 00:15:11.196 "nsid": 1, 00:15:11.196 "bdev_name": "Malloc1", 00:15:11.196 "name": "Malloc1", 00:15:11.196 "nguid": "2CB7DF5E1AC744FB8EC54E30FE3955F6", 00:15:11.196 "uuid": "2cb7df5e-1ac7-44fb-8ec5-4e30fe3955f6" 00:15:11.196 } 00:15:11.196 ] 00:15:11.196 }, 00:15:11.196 { 00:15:11.196 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:11.196 "subtype": "NVMe", 00:15:11.196 "listen_addresses": [ 00:15:11.196 { 00:15:11.196 "trtype": "VFIOUSER", 00:15:11.196 "adrfam": "IPv4", 00:15:11.196 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:11.196 "trsvcid": "0" 00:15:11.196 } 00:15:11.197 ], 00:15:11.197 "allow_any_host": true, 00:15:11.197 "hosts": [], 00:15:11.197 "serial_number": "SPDK2", 00:15:11.197 "model_number": "SPDK bdev Controller", 00:15:11.197 "max_namespaces": 32, 00:15:11.197 "min_cntlid": 1, 00:15:11.197 "max_cntlid": 65519, 00:15:11.197 "namespaces": [ 00:15:11.197 { 00:15:11.197 "nsid": 1, 00:15:11.197 "bdev_name": "Malloc2", 00:15:11.197 "name": "Malloc2", 00:15:11.197 "nguid": "C8B36FF2270C4E7CB1322C5FE2DFA11B", 00:15:11.197 "uuid": "c8b36ff2-270c-4e7c-b132-2c5fe2dfa11b" 00:15:11.197 } 00:15:11.197 ] 00:15:11.197 } 00:15:11.197 ] 00:15:11.197 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:11.197 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:11.197 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1503747 00:15:11.197 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:11.197 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:11.197 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:11.197 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:11.197 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:11.197 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:11.197 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:11.197 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.456 [2024-07-24 19:15:57.446146] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:11.456 Malloc3 00:15:11.456 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:11.456 [2024-07-24 19:15:57.661834] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:11.456 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:11.715 Asynchronous Event Request test 00:15:11.715 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:11.715 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:11.715 Registering asynchronous event callbacks... 00:15:11.715 Starting namespace attribute notice tests for all controllers... 00:15:11.715 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:11.715 aer_cb - Changed Namespace 00:15:11.715 Cleaning up... 00:15:11.715 [ 00:15:11.715 { 00:15:11.715 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:11.715 "subtype": "Discovery", 00:15:11.715 "listen_addresses": [], 00:15:11.715 "allow_any_host": true, 00:15:11.715 "hosts": [] 00:15:11.715 }, 00:15:11.715 { 00:15:11.716 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:11.716 "subtype": "NVMe", 00:15:11.716 "listen_addresses": [ 00:15:11.716 { 00:15:11.716 "trtype": "VFIOUSER", 00:15:11.716 "adrfam": "IPv4", 00:15:11.716 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:11.716 "trsvcid": "0" 00:15:11.716 } 00:15:11.716 ], 00:15:11.716 "allow_any_host": true, 00:15:11.716 "hosts": [], 00:15:11.716 "serial_number": "SPDK1", 00:15:11.716 "model_number": "SPDK bdev Controller", 00:15:11.716 "max_namespaces": 32, 00:15:11.716 "min_cntlid": 1, 00:15:11.716 "max_cntlid": 65519, 00:15:11.716 "namespaces": [ 00:15:11.716 { 00:15:11.716 "nsid": 1, 00:15:11.716 "bdev_name": "Malloc1", 00:15:11.716 "name": "Malloc1", 00:15:11.716 "nguid": "2CB7DF5E1AC744FB8EC54E30FE3955F6", 00:15:11.716 "uuid": "2cb7df5e-1ac7-44fb-8ec5-4e30fe3955f6" 00:15:11.716 }, 00:15:11.716 { 00:15:11.716 "nsid": 2, 00:15:11.716 "bdev_name": "Malloc3", 00:15:11.716 "name": "Malloc3", 00:15:11.716 "nguid": "FC03878D36314923B1540872588A58FD", 00:15:11.716 "uuid": "fc03878d-3631-4923-b154-0872588a58fd" 00:15:11.716 } 00:15:11.716 ] 00:15:11.716 }, 00:15:11.716 { 00:15:11.716 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:11.716 "subtype": "NVMe", 00:15:11.716 "listen_addresses": [ 00:15:11.716 { 00:15:11.716 "trtype": "VFIOUSER", 00:15:11.716 "adrfam": "IPv4", 00:15:11.716 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:11.716 "trsvcid": "0" 00:15:11.716 } 00:15:11.716 ], 00:15:11.716 "allow_any_host": true, 00:15:11.716 "hosts": [], 00:15:11.716 
"serial_number": "SPDK2", 00:15:11.716 "model_number": "SPDK bdev Controller", 00:15:11.716 "max_namespaces": 32, 00:15:11.716 "min_cntlid": 1, 00:15:11.716 "max_cntlid": 65519, 00:15:11.716 "namespaces": [ 00:15:11.716 { 00:15:11.716 "nsid": 1, 00:15:11.716 "bdev_name": "Malloc2", 00:15:11.716 "name": "Malloc2", 00:15:11.716 "nguid": "C8B36FF2270C4E7CB1322C5FE2DFA11B", 00:15:11.716 "uuid": "c8b36ff2-270c-4e7c-b132-2c5fe2dfa11b" 00:15:11.716 } 00:15:11.716 ] 00:15:11.716 } 00:15:11.716 ] 00:15:11.716 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1503747 00:15:11.716 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:11.716 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:11.716 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:11.716 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:11.716 [2024-07-24 19:15:57.904515] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:15:11.716 [2024-07-24 19:15:57.904554] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1503983 ] 00:15:11.716 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.716 [2024-07-24 19:15:57.935934] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:11.716 [2024-07-24 19:15:57.945973] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:11.716 [2024-07-24 19:15:57.945994] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc70cdc6000 00:15:11.716 [2024-07-24 19:15:57.946971] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:11.716 [2024-07-24 19:15:57.947974] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:11.716 [2024-07-24 19:15:57.948977] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:11.716 [2024-07-24 19:15:57.949984] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:11.716 [2024-07-24 19:15:57.950991] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:11.716 [2024-07-24 19:15:57.952001] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:11.716 [2024-07-24 19:15:57.953010] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:11.716 [2024-07-24 19:15:57.954021] 
vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:11.976 [2024-07-24 19:15:57.955028] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:11.976 [2024-07-24 19:15:57.955040] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc70cdbb000 00:15:11.976 [2024-07-24 19:15:57.955932] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:11.976 [2024-07-24 19:15:57.968138] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:11.976 [2024-07-24 19:15:57.968158] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:11.976 [2024-07-24 19:15:57.970208] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:11.976 [2024-07-24 19:15:57.970246] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:11.977 [2024-07-24 19:15:57.970316] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:11.977 [2024-07-24 19:15:57.970334] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:11.977 [2024-07-24 19:15:57.970340] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:11.977 [2024-07-24 19:15:57.971214] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:11.977 [2024-07-24 19:15:57.971227] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:11.977 [2024-07-24 19:15:57.971236] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:11.977 [2024-07-24 19:15:57.972219] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:11.977 [2024-07-24 19:15:57.972229] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:11.977 [2024-07-24 19:15:57.972238] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:11.977 [2024-07-24 19:15:57.973229] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:11.977 [2024-07-24 19:15:57.973240] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:11.977 [2024-07-24 19:15:57.974234] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:11.977 [2024-07-24 19:15:57.974245] 
nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:11.977 [2024-07-24 19:15:57.974251] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:11.977 [2024-07-24 19:15:57.974260] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:11.977 [2024-07-24 19:15:57.974366] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:11.977 [2024-07-24 19:15:57.974373] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:11.977 [2024-07-24 19:15:57.974379] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:11.977 [2024-07-24 19:15:57.975247] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:11.977 [2024-07-24 19:15:57.976252] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:11.977 [2024-07-24 19:15:57.977261] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:11.977 [2024-07-24 19:15:57.978262] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:11.977 [2024-07-24 19:15:57.978304] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:11.977 [2024-07-24 19:15:57.979281] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:11.977 [2024-07-24 19:15:57.979293] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:11.977 [2024-07-24 19:15:57.979300] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:11.977 [2024-07-24 19:15:57.979319] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:11.977 [2024-07-24 19:15:57.979332] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:11.977 [2024-07-24 19:15:57.979345] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:11.977 [2024-07-24 19:15:57.979352] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:11.977 [2024-07-24 19:15:57.979356] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:11.977 [2024-07-24 19:15:57.979369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:11.977 [2024-07-24 19:15:57.985722] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:11.977 [2024-07-24 19:15:57.985735] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:11.977 [2024-07-24 19:15:57.985742] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:11.977 [2024-07-24 19:15:57.985748] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:11.977 [2024-07-24 19:15:57.985753] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:11.977 [2024-07-24 19:15:57.985760] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:11.977 [2024-07-24 19:15:57.985766] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:11.977 [2024-07-24 19:15:57.985772] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:11.977 [2024-07-24 19:15:57.985780] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:11.977 [2024-07-24 19:15:57.985794] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:11.977 [2024-07-24 19:15:57.993721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:11.977 [2024-07-24 19:15:57.993736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.977 [2024-07-24 19:15:57.993746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.977 [2024-07-24 19:15:57.993755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.977 [2024-07-24 19:15:57.993764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.977 [2024-07-24 19:15:57.993770] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:11.977 [2024-07-24 19:15:57.993780] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:11.977 [2024-07-24 19:15:57.993790] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:11.977 [2024-07-24 19:15:58.001720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:11.977 [2024-07-24 19:15:58.001729] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:11.977 [2024-07-24 19:15:58.001735] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:11.977 [2024-07-24 19:15:58.001746] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:11.977 [2024-07-24 19:15:58.001754] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:11.977 [2024-07-24 19:15:58.001763] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:11.977 [2024-07-24 19:15:58.009721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:11.977 [2024-07-24 19:15:58.009775] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:11.977 [2024-07-24 19:15:58.009784] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:11.977 [2024-07-24 19:15:58.009792] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:11.978 [2024-07-24 19:15:58.009798] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:11.978 [2024-07-24 19:15:58.009803] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:11.978 [2024-07-24 19:15:58.009810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:11.978 [2024-07-24 19:15:58.017721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:11.978 [2024-07-24 19:15:58.017734] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:11.978 [2024-07-24 19:15:58.017744] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:11.978 [2024-07-24 19:15:58.017753] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:11.978 [2024-07-24 19:15:58.017761] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:11.978 [2024-07-24 19:15:58.017767] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:11.978 [2024-07-24 19:15:58.017771] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:11.978 [2024-07-24 19:15:58.017778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:11.978 [2024-07-24 19:15:58.025720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:11.978 [2024-07-24 19:15:58.025737] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:11.978 [2024-07-24 19:15:58.025747] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:11.978 [2024-07-24 19:15:58.025755] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:11.978 [2024-07-24 19:15:58.025761] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:11.978 [2024-07-24 19:15:58.025767] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:11.978 [2024-07-24 19:15:58.025775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:11.978 [2024-07-24 19:15:58.033719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:11.978 [2024-07-24 19:15:58.033730] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:11.978 [2024-07-24 19:15:58.033739] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:11.978 [2024-07-24 19:15:58.033748] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:11.978 [2024-07-24 19:15:58.033756] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:15:11.978 [2024-07-24 19:15:58.033763] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:11.978 [2024-07-24 19:15:58.033769] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:11.978 [2024-07-24 19:15:58.033775] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:11.978 [2024-07-24 19:15:58.033781] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:11.978 [2024-07-24 19:15:58.033787] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:11.978 [2024-07-24 19:15:58.033803] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:11.978 [2024-07-24 19:15:58.041719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:11.978 [2024-07-24 19:15:58.041734] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:11.978 [2024-07-24 19:15:58.049719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:11.978 [2024-07-24 19:15:58.049734] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:11.978 [2024-07-24 19:15:58.057720] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:11.978 [2024-07-24 19:15:58.057734] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:11.978 [2024-07-24 19:15:58.065723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:11.978 [2024-07-24 19:15:58.065743] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:11.978 [2024-07-24 19:15:58.065749] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:11.978 [2024-07-24 19:15:58.065754] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:11.978 [2024-07-24 19:15:58.065758] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:11.978 [2024-07-24 19:15:58.065763] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:11.978 [2024-07-24 19:15:58.065770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:11.978 [2024-07-24 19:15:58.065780] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:11.978 [2024-07-24 19:15:58.065786] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:11.978 [2024-07-24 19:15:58.065790] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:11.978 [2024-07-24 19:15:58.065797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:11.978 [2024-07-24 19:15:58.065805] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:11.978 [2024-07-24 19:15:58.065810] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:11.978 [2024-07-24 19:15:58.065815] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:11.978 [2024-07-24 19:15:58.065821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:11.978 [2024-07-24 19:15:58.065829] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:11.978 [2024-07-24 19:15:58.065835] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:11.978 [2024-07-24 19:15:58.065839] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:11.978 [2024-07-24 19:15:58.065846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:11.978 [2024-07-24 19:15:58.073722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:11.978 [2024-07-24 19:15:58.073739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:11.978 [2024-07-24 19:15:58.073752] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:11.978 [2024-07-24 19:15:58.073760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:11.978 ===================================================== 00:15:11.978 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:11.978 ===================================================== 00:15:11.978 Controller Capabilities/Features 00:15:11.978 ================================ 00:15:11.978 Vendor ID: 4e58 00:15:11.978 Subsystem Vendor ID: 4e58 00:15:11.978 Serial Number: SPDK2 00:15:11.978 Model Number: SPDK bdev Controller 00:15:11.978 Firmware Version: 24.09 00:15:11.978 Recommended Arb Burst: 6 00:15:11.978 IEEE OUI Identifier: 8d 6b 50 00:15:11.978 Multi-path I/O 00:15:11.978 May have multiple subsystem ports: Yes 00:15:11.979 May have multiple controllers: Yes 00:15:11.979 Associated with SR-IOV VF: No 00:15:11.979 Max Data Transfer Size: 131072 00:15:11.979 Max Number of Namespaces: 32 00:15:11.979 Max Number of I/O Queues: 127 00:15:11.979 NVMe Specification Version (VS): 1.3 00:15:11.979 NVMe Specification Version (Identify): 1.3 00:15:11.979 Maximum Queue Entries: 256 00:15:11.979 Contiguous Queues Required: Yes 00:15:11.979 Arbitration Mechanisms Supported 00:15:11.979 Weighted Round Robin: Not Supported 00:15:11.979 Vendor Specific: Not Supported 00:15:11.979 Reset Timeout: 15000 ms 00:15:11.979 Doorbell Stride: 4 bytes 00:15:11.979 NVM Subsystem Reset: Not Supported 00:15:11.979 Command Sets Supported 00:15:11.979 NVM Command Set: Supported 00:15:11.979 Boot Partition: Not Supported 00:15:11.979 Memory Page Size Minimum: 4096 bytes 00:15:11.979 Memory Page Size Maximum: 4096 bytes 00:15:11.979 Persistent Memory Region: Not Supported 00:15:11.979 Optional Asynchronous Events Supported 00:15:11.979 Namespace Attribute Notices: Supported 00:15:11.979 Firmware Activation Notices: Not Supported 00:15:11.979 ANA Change Notices: Not Supported 00:15:11.979 PLE Aggregate Log Change Notices: Not Supported 00:15:11.979 LBA Status Info Alert Notices: Not Supported 00:15:11.979 EGE Aggregate Log Change Notices: Not Supported 00:15:11.979 Normal NVM Subsystem Shutdown event: Not Supported 00:15:11.979 Zone Descriptor Change Notices: Not Supported 00:15:11.979 Discovery Log Change Notices: Not Supported 00:15:11.979 Controller Attributes 00:15:11.979 128-bit Host Identifier: Supported 00:15:11.979 Non-Operational Permissive Mode: Not Supported 00:15:11.979 NVM Sets: Not Supported 00:15:11.979 Read Recovery Levels: Not Supported 00:15:11.979 Endurance Groups: Not Supported 00:15:11.979 Predictable Latency Mode: Not Supported 00:15:11.979 Traffic Based Keep ALive: Not Supported 00:15:11.979 Namespace Granularity: Not Supported 00:15:11.979 SQ Associations: Not Supported 00:15:11.979 UUID List: Not Supported 00:15:11.979 Multi-Domain Subsystem: Not Supported 00:15:11.979 Fixed Capacity Management: Not Supported 00:15:11.979 Variable Capacity Management: Not Supported 00:15:11.979 Delete Endurance Group: Not Supported 00:15:11.979 Delete NVM Set: Not Supported 00:15:11.979 Extended LBA Formats Supported: Not Supported 00:15:11.979 Flexible Data Placement Supported: Not Supported 00:15:11.979 00:15:11.979 Controller Memory Buffer Support 00:15:11.979 ================================ 00:15:11.979 Supported: No 00:15:11.979 00:15:11.979 Persistent Memory Region Support 00:15:11.979 
================================ 00:15:11.979 Supported: No 00:15:11.979 00:15:11.979 Admin Command Set Attributes 00:15:11.979 ============================ 00:15:11.979 Security Send/Receive: Not Supported 00:15:11.979 Format NVM: Not Supported 00:15:11.979 Firmware Activate/Download: Not Supported 00:15:11.979 Namespace Management: Not Supported 00:15:11.979 Device Self-Test: Not Supported 00:15:11.979 Directives: Not Supported 00:15:11.979 NVMe-MI: Not Supported 00:15:11.979 Virtualization Management: Not Supported 00:15:11.979 Doorbell Buffer Config: Not Supported 00:15:11.979 Get LBA Status Capability: Not Supported 00:15:11.979 Command & Feature Lockdown Capability: Not Supported 00:15:11.979 Abort Command Limit: 4 00:15:11.979 Async Event Request Limit: 4 00:15:11.979 Number of Firmware Slots: N/A 00:15:11.979 Firmware Slot 1 Read-Only: N/A 00:15:11.979 Firmware Activation Without Reset: N/A 00:15:11.979 Multiple Update Detection Support: N/A 00:15:11.979 Firmware Update Granularity: No Information Provided 00:15:11.979 Per-Namespace SMART Log: No 00:15:11.979 Asymmetric Namespace Access Log Page: Not Supported 00:15:11.979 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:11.979 Command Effects Log Page: Supported 00:15:11.979 Get Log Page Extended Data: Supported 00:15:11.979 Telemetry Log Pages: Not Supported 00:15:11.979 Persistent Event Log Pages: Not Supported 00:15:11.979 Supported Log Pages Log Page: May Support 00:15:11.979 Commands Supported & Effects Log Page: Not Supported 00:15:11.979 Feature Identifiers & Effects Log Page:May Support 00:15:11.979 NVMe-MI Commands & Effects Log Page: May Support 00:15:11.979 Data Area 4 for Telemetry Log: Not Supported 00:15:11.979 Error Log Page Entries Supported: 128 00:15:11.979 Keep Alive: Supported 00:15:11.979 Keep Alive Granularity: 10000 ms 00:15:11.979 00:15:11.979 NVM Command Set Attributes 00:15:11.979 ========================== 00:15:11.979 Submission Queue Entry Size 00:15:11.979 Max: 64 00:15:11.979 Min: 64 00:15:11.979 Completion Queue Entry Size 00:15:11.979 Max: 16 00:15:11.979 Min: 16 00:15:11.979 Number of Namespaces: 32 00:15:11.979 Compare Command: Supported 00:15:11.979 Write Uncorrectable Command: Not Supported 00:15:11.979 Dataset Management Command: Supported 00:15:11.979 Write Zeroes Command: Supported 00:15:11.979 Set Features Save Field: Not Supported 00:15:11.979 Reservations: Not Supported 00:15:11.979 Timestamp: Not Supported 00:15:11.979 Copy: Supported 00:15:11.979 Volatile Write Cache: Present 00:15:11.979 Atomic Write Unit (Normal): 1 00:15:11.979 Atomic Write Unit (PFail): 1 00:15:11.979 Atomic Compare & Write Unit: 1 00:15:11.979 Fused Compare & Write: Supported 00:15:11.979 Scatter-Gather List 00:15:11.979 SGL Command Set: Supported (Dword aligned) 00:15:11.979 SGL Keyed: Not Supported 00:15:11.979 SGL Bit Bucket Descriptor: Not Supported 00:15:11.979 SGL Metadata Pointer: Not Supported 00:15:11.979 Oversized SGL: Not Supported 00:15:11.979 SGL Metadata Address: Not Supported 00:15:11.979 SGL Offset: Not Supported 00:15:11.979 Transport SGL Data Block: Not Supported 00:15:11.979 Replay Protected Memory Block: Not Supported 00:15:11.979 00:15:11.979 Firmware Slot Information 00:15:11.979 ========================= 00:15:11.979 Active slot: 1 00:15:11.979 Slot 1 Firmware Revision: 24.09 00:15:11.979 00:15:11.979 00:15:11.979 Commands Supported and Effects 00:15:11.979 ============================== 00:15:11.979 Admin Commands 00:15:11.979 -------------- 00:15:11.979 Get Log Page (02h): Supported 
00:15:11.979 Identify (06h): Supported 00:15:11.979 Abort (08h): Supported 00:15:11.979 Set Features (09h): Supported 00:15:11.979 Get Features (0Ah): Supported 00:15:11.979 Asynchronous Event Request (0Ch): Supported 00:15:11.979 Keep Alive (18h): Supported 00:15:11.979 I/O Commands 00:15:11.979 ------------ 00:15:11.979 Flush (00h): Supported LBA-Change 00:15:11.979 Write (01h): Supported LBA-Change 00:15:11.979 Read (02h): Supported 00:15:11.979 Compare (05h): Supported 00:15:11.979 Write Zeroes (08h): Supported LBA-Change 00:15:11.979 Dataset Management (09h): Supported LBA-Change 00:15:11.979 Copy (19h): Supported LBA-Change 00:15:11.979 00:15:11.979 Error Log 00:15:11.979 ========= 00:15:11.979 00:15:11.979 Arbitration 00:15:11.979 =========== 00:15:11.979 Arbitration Burst: 1 00:15:11.979 00:15:11.979 Power Management 00:15:11.979 ================ 00:15:11.979 Number of Power States: 1 00:15:11.979 Current Power State: Power State #0 00:15:11.979 Power State #0: 00:15:11.979 Max Power: 0.00 W 00:15:11.979 Non-Operational State: Operational 00:15:11.980 Entry Latency: Not Reported 00:15:11.980 Exit Latency: Not Reported 00:15:11.980 Relative Read Throughput: 0 00:15:11.980 Relative Read Latency: 0 00:15:11.980 Relative Write Throughput: 0 00:15:11.980 Relative Write Latency: 0 00:15:11.980 Idle Power: Not Reported 00:15:11.980 Active Power: Not Reported 00:15:11.980 Non-Operational Permissive Mode: Not Supported 00:15:11.980 00:15:11.980 Health Information 00:15:11.980 ================== 00:15:11.980 Critical Warnings: 00:15:11.980 Available Spare Space: OK 00:15:11.980 Temperature: OK 00:15:11.980 Device Reliability: OK 00:15:11.980 Read Only: No 00:15:11.980 Volatile Memory Backup: OK 00:15:11.980 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:11.980 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:11.980 Available Spare: 0% 00:15:11.980 [2024-07-24 19:15:58.073848] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:11.980 [2024-07-24 19:15:58.081721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:11.980 [2024-07-24 19:15:58.081751] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:11.980 [2024-07-24 19:15:58.081761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.980 [2024-07-24 19:15:58.081769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.980 [2024-07-24 19:15:58.081777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.980 [2024-07-24 19:15:58.081785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.980 [2024-07-24 19:15:58.085721] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:11.980 [2024-07-24 19:15:58.085734] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:11.980 [2024-07-24 19:15:58.085860] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling 
controller 00:15:11.980 [2024-07-24 19:15:58.085906] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:11.980 [2024-07-24 19:15:58.085913] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:11.980 [2024-07-24 19:15:58.086872] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:11.980 [2024-07-24 19:15:58.086886] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:11.980 [2024-07-24 19:15:58.086933] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:11.980 [2024-07-24 19:15:58.087890] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:11.980 Available Spare Threshold: 0% 00:15:11.980 Life Percentage Used: 0% 00:15:11.980 Data Units Read: 0 00:15:11.980 Data Units Written: 0 00:15:11.980 Host Read Commands: 0 00:15:11.980 Host Write Commands: 0 00:15:11.980 Controller Busy Time: 0 minutes 00:15:11.980 Power Cycles: 0 00:15:11.980 Power On Hours: 0 hours 00:15:11.980 Unsafe Shutdowns: 0 00:15:11.980 Unrecoverable Media Errors: 0 00:15:11.980 Lifetime Error Log Entries: 0 00:15:11.980 Warning Temperature Time: 0 minutes 00:15:11.980 Critical Temperature Time: 0 minutes 00:15:11.980 00:15:11.980 Number of Queues 00:15:11.980 ================ 00:15:11.980 Number of I/O Submission Queues: 127 00:15:11.980 Number of I/O Completion Queues: 127 00:15:11.980 00:15:11.980 Active Namespaces 00:15:11.980 ================= 00:15:11.980 Namespace ID:1 00:15:11.980 Error Recovery Timeout: Unlimited 00:15:11.980 Command Set Identifier: NVM (00h) 00:15:11.980 Deallocate: Supported 00:15:11.980 Deallocated/Unwritten Error: Not Supported 00:15:11.980 Deallocated Read Value: Unknown 00:15:11.980 Deallocate in Write Zeroes: Not Supported 00:15:11.980 Deallocated Guard Field: 0xFFFF 00:15:11.980 Flush: Supported 00:15:11.980 Reservation: Supported 00:15:11.980 Namespace Sharing Capabilities: Multiple Controllers 00:15:11.980 Size (in LBAs): 131072 (0GiB) 00:15:11.980 Capacity (in LBAs): 131072 (0GiB) 00:15:11.980 Utilization (in LBAs): 131072 (0GiB) 00:15:11.980 NGUID: C8B36FF2270C4E7CB1322C5FE2DFA11B 00:15:11.980 UUID: c8b36ff2-270c-4e7c-b132-2c5fe2dfa11b 00:15:11.980 Thin Provisioning: Not Supported 00:15:11.980 Per-NS Atomic Units: Yes 00:15:11.980 Atomic Boundary Size (Normal): 0 00:15:11.980 Atomic Boundary Size (PFail): 0 00:15:11.980 Atomic Boundary Offset: 0 00:15:11.980 Maximum Single Source Range Length: 65535 00:15:11.980 Maximum Copy Length: 65535 00:15:11.980 Maximum Source Range Count: 1 00:15:11.980 NGUID/EUI64 Never Reused: No 00:15:11.980 Namespace Write Protected: No 00:15:11.980 Number of LBA Formats: 1 00:15:11.980 Current LBA Format: LBA Format #00 00:15:11.980 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:11.980 00:15:11.980 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:11.980 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.239 [2024-07-24 
19:15:58.292704] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:17.509 Initializing NVMe Controllers 00:15:17.509 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:17.509 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:17.509 Initialization complete. Launching workers. 00:15:17.509 ======================================================== 00:15:17.509 Latency(us) 00:15:17.509 Device Information : IOPS MiB/s Average min max 00:15:17.509 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39964.05 156.11 3202.70 916.93 6715.07 00:15:17.509 ======================================================== 00:15:17.509 Total : 39964.05 156.11 3202.70 916.93 6715.07 00:15:17.509 00:15:17.509 [2024-07-24 19:16:03.398979] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:17.510 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:17.510 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.510 [2024-07-24 19:16:03.613724] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:22.777 Initializing NVMe Controllers 00:15:22.777 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:22.777 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:22.777 Initialization complete. Launching workers. 
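The write-workload results below use the same format as the read run above: spdk_nvme_perf reports IOPS and MiB/s per device, and with -o 4096 every I/O is 4 KiB, so the two columns should satisfy MiB/s = IOPS * 4096 / 2^20. A minimal consistency check in Python against the totals of both tables (the values are copied from this log; the helper function is our own illustration, not an SPDK API):

    # Consistency check for the spdk_nvme_perf tables: with 4 KiB I/O
    # (-o 4096), throughput in MiB/s is IOPS * io_size / 2**20.
    def mib_per_s(iops: float, io_size: int = 4096) -> float:
        return iops * io_size / 2**20

    # (IOPS, MiB/s) totals from the read run above and the write run below.
    for iops, reported in [(39964.05, 156.11), (39957.97, 156.09)]:
        assert abs(mib_per_s(iops) - reported) < 0.01, (iops, reported)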
00:15:22.777 ======================================================== 00:15:22.777 Latency(us) 00:15:22.777 Device Information : IOPS MiB/s Average min max 00:15:22.777 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39957.97 156.09 3203.20 942.17 9457.29 00:15:22.777 ======================================================== 00:15:22.777 Total : 39957.97 156.09 3203.20 942.17 9457.29 00:15:22.777 00:15:22.777 [2024-07-24 19:16:08.634849] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:22.777 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:22.777 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.777 [2024-07-24 19:16:08.852905] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:28.082 [2024-07-24 19:16:14.001814] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:28.082 Initializing NVMe Controllers 00:15:28.082 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:28.082 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:28.082 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:28.082 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:28.082 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:28.082 Initialization complete. Launching workers. 00:15:28.082 Starting thread on core 2 00:15:28.082 Starting thread on core 3 00:15:28.082 Starting thread on core 1 00:15:28.082 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:28.082 EAL: No free 2048 kB hugepages reported on node 1 00:15:28.082 [2024-07-24 19:16:14.304183] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:31.377 [2024-07-24 19:16:17.364879] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:31.377 Initializing NVMe Controllers 00:15:31.377 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:31.377 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:31.377 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:31.377 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:31.377 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:31.377 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:31.377 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:31.377 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:31.377 Initialization complete. Launching workers. 
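In the per-core arbitration results below, the "secs/100000 ios" column is the projected time to finish the fixed per-worker I/O count (the -n 100000 flag in the configuration line above) at the measured rate, i.e. 100000 / IO/s. A quick arithmetic check against the four cores (numbers copied from the table; plain Python, no SPDK code involved):

    # Each arbitration worker issues 100000 I/Os (-n 100000), so the
    # reported "secs/100000 ios" is simply 100000 / IO/s.
    for core, iops in [(0, 9936.00), (1, 8611.00), (2, 7645.67), (3, 11076.00)]:
        print(f"core {core}: {100000 / iops:5.2f} secs/100000 ios")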
00:15:31.377 Starting thread on core 1 with urgent priority queue 00:15:31.377 Starting thread on core 2 with urgent priority queue 00:15:31.377 Starting thread on core 3 with urgent priority queue 00:15:31.377 Starting thread on core 0 with urgent priority queue 00:15:31.377 SPDK bdev Controller (SPDK2 ) core 0: 9936.00 IO/s 10.06 secs/100000 ios 00:15:31.377 SPDK bdev Controller (SPDK2 ) core 1: 8611.00 IO/s 11.61 secs/100000 ios 00:15:31.377 SPDK bdev Controller (SPDK2 ) core 2: 7645.67 IO/s 13.08 secs/100000 ios 00:15:31.377 SPDK bdev Controller (SPDK2 ) core 3: 11076.00 IO/s 9.03 secs/100000 ios 00:15:31.377 ======================================================== 00:15:31.377 00:15:31.377 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:31.377 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.636 [2024-07-24 19:16:17.656144] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:31.636 Initializing NVMe Controllers 00:15:31.636 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:31.636 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:31.636 Namespace ID: 1 size: 0GB 00:15:31.636 Initialization complete. 00:15:31.636 INFO: using host memory buffer for IO 00:15:31.636 Hello world! 00:15:31.636 [2024-07-24 19:16:17.668224] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:31.636 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:31.636 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.895 [2024-07-24 19:16:17.951928] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:32.832 Initializing NVMe Controllers 00:15:32.832 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:32.832 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:32.832 Initialization complete. Launching workers. 
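The overhead tool's statistics follow: submit and complete latency as avg/min/max in nanoseconds, then cumulative histograms in which each "Range in us" row prints the running percentage of all operations up to that bucket, with the per-bucket count in parentheses; empty buckets are skipped. A minimal sketch of that bookkeeping in Python, using made-up bucket edges (the tool's real bucketing is finer and not reproduced here):

    import bisect

    EDGES = [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0, 128.0]  # illustrative edges, in us

    def cumulative_histogram(latencies_us):
        counts = [0] * (len(EDGES) + 1)
        for lat in latencies_us:
            counts[bisect.bisect_left(EDGES, lat)] += 1
        total, running = len(latencies_us), 0
        for i, c in enumerate(counts):
            if c == 0:
                continue  # the tool omits empty buckets too
            running += c
            lo = EDGES[i - 1] if i else 0.0
            hi = EDGES[i] if i < len(EDGES) else float("inf")
            print(f"{lo:8.3f} - {hi:8.3f}: {100.0 * running / total:8.4f}% ({c:5d})")

    cumulative_histogram([1.7, 1.8, 1.8, 3.1, 3.2, 7.9, 90.0])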
00:15:32.832 submit (in ns) avg, min, max = 7482.8, 3026.4, 6990721.6 00:15:32.832 complete (in ns) avg, min, max = 18797.2, 1674.4, 3999034.4 00:15:32.832 00:15:32.832 Submit histogram 00:15:32.832 ================ 00:15:32.832 Range in us Cumulative Count 00:15:32.832 3.021 - 3.034: 0.0059% ( 1) 00:15:32.832 3.059 - 3.072: 0.0118% ( 1) 00:15:32.832 3.072 - 3.085: 0.0296% ( 3) 00:15:32.832 3.085 - 3.098: 0.1893% ( 27) 00:15:32.832 3.098 - 3.110: 0.8402% ( 110) 00:15:32.832 3.110 - 3.123: 1.9940% ( 195) 00:15:32.832 3.123 - 3.136: 4.1299% ( 361) 00:15:32.832 3.136 - 3.149: 7.3842% ( 550) 00:15:32.832 3.149 - 3.162: 11.7922% ( 745) 00:15:32.832 3.162 - 3.174: 17.0937% ( 896) 00:15:32.832 3.174 - 3.187: 22.6318% ( 936) 00:15:32.832 3.187 - 3.200: 28.3179% ( 961) 00:15:32.832 3.200 - 3.213: 34.2879% ( 1009) 00:15:32.832 3.213 - 3.226: 40.8023% ( 1101) 00:15:32.832 3.226 - 3.238: 47.5534% ( 1141) 00:15:32.832 3.238 - 3.251: 53.6595% ( 1032) 00:15:32.832 3.251 - 3.264: 57.6120% ( 668) 00:15:32.832 3.264 - 3.277: 61.1266% ( 594) 00:15:32.832 3.277 - 3.302: 68.1380% ( 1185) 00:15:32.832 3.302 - 3.328: 73.6406% ( 930) 00:15:32.832 3.328 - 3.354: 78.8533% ( 881) 00:15:32.832 3.354 - 3.379: 85.3322% ( 1095) 00:15:32.832 3.379 - 3.405: 87.7345% ( 406) 00:15:32.832 3.405 - 3.430: 88.7640% ( 174) 00:15:32.832 3.430 - 3.456: 89.4681% ( 119) 00:15:32.832 3.456 - 3.482: 90.6159% ( 194) 00:15:32.832 3.482 - 3.507: 92.0833% ( 248) 00:15:32.832 3.507 - 3.533: 93.9708% ( 319) 00:15:32.832 3.533 - 3.558: 95.3671% ( 236) 00:15:32.832 3.558 - 3.584: 96.5742% ( 204) 00:15:32.832 3.584 - 3.610: 97.5504% ( 165) 00:15:32.832 3.610 - 3.635: 98.4557% ( 153) 00:15:32.832 3.635 - 3.661: 98.9764% ( 88) 00:15:32.832 3.661 - 3.686: 99.3077% ( 56) 00:15:32.832 3.686 - 3.712: 99.4852% ( 30) 00:15:32.832 3.712 - 3.738: 99.5681% ( 14) 00:15:32.832 3.738 - 3.763: 99.6036% ( 6) 00:15:32.832 3.763 - 3.789: 99.6213% ( 3) 00:15:32.832 3.789 - 3.814: 99.6272% ( 1) 00:15:32.832 5.504 - 5.530: 99.6332% ( 1) 00:15:32.832 5.581 - 5.606: 99.6391% ( 1) 00:15:32.832 5.683 - 5.709: 99.6450% ( 1) 00:15:32.832 5.786 - 5.811: 99.6509% ( 1) 00:15:32.832 5.811 - 5.837: 99.6627% ( 2) 00:15:32.832 5.914 - 5.939: 99.6687% ( 1) 00:15:32.832 5.939 - 5.965: 99.6746% ( 1) 00:15:32.832 6.016 - 6.042: 99.6805% ( 1) 00:15:32.832 6.067 - 6.093: 99.6864% ( 1) 00:15:32.832 6.170 - 6.195: 99.6923% ( 1) 00:15:32.832 6.195 - 6.221: 99.6982% ( 1) 00:15:32.832 6.221 - 6.246: 99.7042% ( 1) 00:15:32.832 6.272 - 6.298: 99.7101% ( 1) 00:15:32.832 6.400 - 6.426: 99.7160% ( 1) 00:15:32.832 6.502 - 6.528: 99.7219% ( 1) 00:15:32.832 6.656 - 6.707: 99.7397% ( 3) 00:15:32.832 6.707 - 6.758: 99.7456% ( 1) 00:15:32.832 6.758 - 6.810: 99.7633% ( 3) 00:15:32.832 6.861 - 6.912: 99.7692% ( 1) 00:15:32.832 7.066 - 7.117: 99.7870% ( 3) 00:15:32.832 7.117 - 7.168: 99.7929% ( 1) 00:15:32.832 7.168 - 7.219: 99.7988% ( 1) 00:15:32.832 7.219 - 7.270: 99.8047% ( 1) 00:15:32.832 7.270 - 7.322: 99.8107% ( 1) 00:15:32.832 7.578 - 7.629: 99.8166% ( 1) 00:15:32.832 7.680 - 7.731: 99.8284% ( 2) 00:15:32.832 7.834 - 7.885: 99.8343% ( 1) 00:15:32.832 7.987 - 8.038: 99.8462% ( 2) 00:15:32.832 8.038 - 8.090: 99.8521% ( 1) 00:15:32.832 8.090 - 8.141: 99.8580% ( 1) 00:15:32.832 8.243 - 8.294: 99.8639% ( 1) 00:15:32.832 8.294 - 8.346: 99.8698% ( 1) 00:15:32.832 8.499 - 8.550: 99.8757% ( 1) 00:15:32.832 8.909 - 8.960: 99.8817% ( 1) 00:15:32.832 10.035 - 10.086: 99.8876% ( 1) 00:15:32.832 13.517 - 13.619: 99.8935% ( 1) 00:15:32.832 14.746 - 14.848: 99.8994% ( 1) 00:15:32.832 3984.589 - 4010.803: 
99.9941% ( 16) 00:15:32.832 6973.030 - 7025.459: 100.0000% ( 1) 00:15:32.832 00:15:32.832 Complete histogram 00:15:32.832 ================== 00:15:32.832 [2024-07-24 19:16:19.046540] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:33.093 Range in us Cumulative Count 00:15:33.093 1.664 - 1.677: 0.0118% ( 2) 00:15:33.093 1.677 - 1.690: 0.0296% ( 3) 00:15:33.093 1.690 - 1.702: 0.0473% ( 3) 00:15:33.093 1.702 - 1.715: 0.2485% ( 34) 00:15:33.093 1.715 - 1.728: 5.9405% ( 962) 00:15:33.093 1.728 - 1.741: 14.7210% ( 1484) 00:15:33.093 1.741 - 1.754: 17.4191% ( 456) 00:15:33.093 1.754 - 1.766: 20.7266% ( 559) 00:15:33.093 1.766 - 1.779: 57.9137% ( 6285) 00:15:33.093 1.779 - 1.792: 90.0420% ( 5430) 00:15:33.093 1.792 - 1.805: 95.3967% ( 905) 00:15:33.093 1.805 - 1.818: 97.5327% ( 361) 00:15:33.093 1.818 - 1.830: 97.9469% ( 70) 00:15:33.093 1.830 - 1.843: 98.2368% ( 49) 00:15:33.093 1.843 - 1.856: 98.7752% ( 91) 00:15:33.093 1.856 - 1.869: 99.1421% ( 62) 00:15:33.093 1.869 - 1.882: 99.2604% ( 20) 00:15:33.093 1.882 - 1.894: 99.3255% ( 11) 00:15:33.093 1.894 - 1.907: 99.3551% ( 5) 00:15:33.093 1.907 - 1.920: 99.3787% ( 4) 00:15:33.093 1.946 - 1.958: 99.3965% ( 3) 00:15:33.093 1.958 - 1.971: 99.4024% ( 1) 00:15:33.093 1.971 - 1.984: 99.4083% ( 1) 00:15:33.093 2.022 - 2.035: 99.4142% ( 1) 00:15:33.093 2.048 - 2.061: 99.4202% ( 1) 00:15:33.093 2.074 - 2.086: 99.4261% ( 1) 00:15:33.093 2.086 - 2.099: 99.4320% ( 1) 00:15:33.093 2.138 - 2.150: 99.4379% ( 1) 00:15:33.093 2.202 - 2.214: 99.4438% ( 1) 00:15:33.093 2.266 - 2.278: 99.4497% ( 1) 00:15:33.093 4.070 - 4.096: 99.4557% ( 1) 00:15:33.093 4.301 - 4.326: 99.4616% ( 1) 00:15:33.093 4.378 - 4.403: 99.4675% ( 1) 00:15:33.093 4.710 - 4.736: 99.4734% ( 1) 00:15:33.093 4.838 - 4.864: 99.4793% ( 1) 00:15:33.093 4.864 - 4.890: 99.4852% ( 1) 00:15:33.093 4.915 - 4.941: 99.4912% ( 1) 00:15:33.093 5.171 - 5.197: 99.4971% ( 1) 00:15:33.093 5.222 - 5.248: 99.5030% ( 1) 00:15:33.093 5.376 - 5.402: 99.5089% ( 1) 00:15:33.093 5.402 - 5.427: 99.5148% ( 1) 00:15:33.093 5.427 - 5.453: 99.5207% ( 1) 00:15:33.093 5.837 - 5.862: 99.5267% ( 1) 00:15:33.093 6.144 - 6.170: 99.5326% ( 1) 00:15:33.093 6.170 - 6.195: 99.5385% ( 1) 00:15:33.093 6.246 - 6.272: 99.5444% ( 1) 00:15:33.093 6.272 - 6.298: 99.5503% ( 1) 00:15:33.093 6.298 - 6.323: 99.5562% ( 1) 00:15:33.093 8.090 - 8.141: 99.5622% ( 1) 00:15:33.093 10.086 - 10.138: 99.5681% ( 1) 00:15:33.093 14.131 - 14.234: 99.5740% ( 1) 00:15:33.093 3984.589 - 4010.803: 100.0000% ( 72) 00:15:33.093 00:15:33.094 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:33.094 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:33.094 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:33.094 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:33.094 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:33.094 [ 00:15:33.094 { 00:15:33.094 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:33.094 "subtype": "Discovery", 00:15:33.094 "listen_addresses": [], 00:15:33.094 "allow_any_host": true, 
00:15:33.094 "hosts": [] 00:15:33.094 }, 00:15:33.094 { 00:15:33.094 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:33.094 "subtype": "NVMe", 00:15:33.094 "listen_addresses": [ 00:15:33.094 { 00:15:33.094 "trtype": "VFIOUSER", 00:15:33.094 "adrfam": "IPv4", 00:15:33.094 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:33.094 "trsvcid": "0" 00:15:33.094 } 00:15:33.094 ], 00:15:33.094 "allow_any_host": true, 00:15:33.094 "hosts": [], 00:15:33.094 "serial_number": "SPDK1", 00:15:33.094 "model_number": "SPDK bdev Controller", 00:15:33.094 "max_namespaces": 32, 00:15:33.094 "min_cntlid": 1, 00:15:33.094 "max_cntlid": 65519, 00:15:33.094 "namespaces": [ 00:15:33.094 { 00:15:33.094 "nsid": 1, 00:15:33.094 "bdev_name": "Malloc1", 00:15:33.094 "name": "Malloc1", 00:15:33.094 "nguid": "2CB7DF5E1AC744FB8EC54E30FE3955F6", 00:15:33.094 "uuid": "2cb7df5e-1ac7-44fb-8ec5-4e30fe3955f6" 00:15:33.094 }, 00:15:33.094 { 00:15:33.094 "nsid": 2, 00:15:33.094 "bdev_name": "Malloc3", 00:15:33.094 "name": "Malloc3", 00:15:33.094 "nguid": "FC03878D36314923B1540872588A58FD", 00:15:33.094 "uuid": "fc03878d-3631-4923-b154-0872588a58fd" 00:15:33.094 } 00:15:33.094 ] 00:15:33.094 }, 00:15:33.094 { 00:15:33.094 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:33.094 "subtype": "NVMe", 00:15:33.094 "listen_addresses": [ 00:15:33.094 { 00:15:33.094 "trtype": "VFIOUSER", 00:15:33.094 "adrfam": "IPv4", 00:15:33.094 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:33.094 "trsvcid": "0" 00:15:33.094 } 00:15:33.094 ], 00:15:33.094 "allow_any_host": true, 00:15:33.094 "hosts": [], 00:15:33.094 "serial_number": "SPDK2", 00:15:33.094 "model_number": "SPDK bdev Controller", 00:15:33.094 "max_namespaces": 32, 00:15:33.094 "min_cntlid": 1, 00:15:33.094 "max_cntlid": 65519, 00:15:33.094 "namespaces": [ 00:15:33.094 { 00:15:33.094 "nsid": 1, 00:15:33.094 "bdev_name": "Malloc2", 00:15:33.094 "name": "Malloc2", 00:15:33.094 "nguid": "C8B36FF2270C4E7CB1322C5FE2DFA11B", 00:15:33.094 "uuid": "c8b36ff2-270c-4e7c-b132-2c5fe2dfa11b" 00:15:33.094 } 00:15:33.094 ] 00:15:33.094 } 00:15:33.094 ] 00:15:33.094 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:33.094 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1507464 00:15:33.094 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:33.094 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:33.094 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:33.094 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:33.094 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:33.094 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:33.094 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:33.094 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:33.094 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.353 [2024-07-24 19:16:19.440143] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:33.353 Malloc4 00:15:33.353 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:33.611 [2024-07-24 19:16:19.634627] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:33.611 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:33.611 Asynchronous Event Request test 00:15:33.611 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:33.611 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:33.611 Registering asynchronous event callbacks... 00:15:33.611 Starting namespace attribute notice tests for all controllers... 00:15:33.611 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:33.611 aer_cb - Changed Namespace 00:15:33.611 Cleaning up... 00:15:33.611 [ 00:15:33.611 { 00:15:33.611 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:33.611 "subtype": "Discovery", 00:15:33.611 "listen_addresses": [], 00:15:33.611 "allow_any_host": true, 00:15:33.611 "hosts": [] 00:15:33.611 }, 00:15:33.611 { 00:15:33.611 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:33.611 "subtype": "NVMe", 00:15:33.611 "listen_addresses": [ 00:15:33.611 { 00:15:33.611 "trtype": "VFIOUSER", 00:15:33.611 "adrfam": "IPv4", 00:15:33.611 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:33.611 "trsvcid": "0" 00:15:33.611 } 00:15:33.611 ], 00:15:33.611 "allow_any_host": true, 00:15:33.611 "hosts": [], 00:15:33.611 "serial_number": "SPDK1", 00:15:33.611 "model_number": "SPDK bdev Controller", 00:15:33.611 "max_namespaces": 32, 00:15:33.611 "min_cntlid": 1, 00:15:33.611 "max_cntlid": 65519, 00:15:33.611 "namespaces": [ 00:15:33.611 { 00:15:33.611 "nsid": 1, 00:15:33.611 "bdev_name": "Malloc1", 00:15:33.611 "name": "Malloc1", 00:15:33.611 "nguid": "2CB7DF5E1AC744FB8EC54E30FE3955F6", 00:15:33.611 "uuid": "2cb7df5e-1ac7-44fb-8ec5-4e30fe3955f6" 00:15:33.611 }, 00:15:33.611 { 00:15:33.611 "nsid": 2, 00:15:33.611 "bdev_name": "Malloc3", 00:15:33.611 "name": "Malloc3", 00:15:33.611 "nguid": "FC03878D36314923B1540872588A58FD", 00:15:33.611 "uuid": "fc03878d-3631-4923-b154-0872588a58fd" 00:15:33.611 } 00:15:33.611 ] 00:15:33.611 }, 00:15:33.611 { 00:15:33.612 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:33.612 "subtype": "NVMe", 00:15:33.612 "listen_addresses": [ 00:15:33.612 { 00:15:33.612 "trtype": "VFIOUSER", 00:15:33.612 "adrfam": "IPv4", 00:15:33.612 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:33.612 "trsvcid": "0" 00:15:33.612 } 00:15:33.612 ], 00:15:33.612 "allow_any_host": true, 00:15:33.612 "hosts": [], 00:15:33.612 
"serial_number": "SPDK2", 00:15:33.612 "model_number": "SPDK bdev Controller", 00:15:33.612 "max_namespaces": 32, 00:15:33.612 "min_cntlid": 1, 00:15:33.612 "max_cntlid": 65519, 00:15:33.612 "namespaces": [ 00:15:33.612 { 00:15:33.612 "nsid": 1, 00:15:33.612 "bdev_name": "Malloc2", 00:15:33.612 "name": "Malloc2", 00:15:33.612 "nguid": "C8B36FF2270C4E7CB1322C5FE2DFA11B", 00:15:33.612 "uuid": "c8b36ff2-270c-4e7c-b132-2c5fe2dfa11b" 00:15:33.612 }, 00:15:33.612 { 00:15:33.612 "nsid": 2, 00:15:33.612 "bdev_name": "Malloc4", 00:15:33.612 "name": "Malloc4", 00:15:33.612 "nguid": "539C15DAADEA4C6D9DE8BA9617E6A222", 00:15:33.612 "uuid": "539c15da-adea-4c6d-9de8-ba9617e6a222" 00:15:33.612 } 00:15:33.612 ] 00:15:33.612 } 00:15:33.612 ] 00:15:33.612 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1507464 00:15:33.612 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:33.612 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1499719 00:15:33.612 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1499719 ']' 00:15:33.612 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1499719 00:15:33.870 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:33.870 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:33.870 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1499719 00:15:33.870 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:33.870 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:33.870 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1499719' 00:15:33.870 killing process with pid 1499719 00:15:33.870 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1499719 00:15:33.870 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1499719 00:15:34.129 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:34.129 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:34.129 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:34.129 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:34.129 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:34.129 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1507729 00:15:34.129 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1507729' 00:15:34.129 Process pid: 1507729 00:15:34.129 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:34.129 19:16:20 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:34.129 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1507729 00:15:34.129 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1507729 ']' 00:15:34.129 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.129 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:34.129 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.129 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:34.129 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:34.129 [2024-07-24 19:16:20.218287] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:34.129 [2024-07-24 19:16:20.219157] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:15:34.129 [2024-07-24 19:16:20.219197] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:34.129 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.129 [2024-07-24 19:16:20.289056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:34.129 [2024-07-24 19:16:20.365468] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:34.129 [2024-07-24 19:16:20.365507] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:34.130 [2024-07-24 19:16:20.365517] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:34.130 [2024-07-24 19:16:20.365525] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:34.130 [2024-07-24 19:16:20.365532] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:34.130 [2024-07-24 19:16:20.365582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.130 [2024-07-24 19:16:20.365678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:34.130 [2024-07-24 19:16:20.365764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:34.130 [2024-07-24 19:16:20.365766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.388 [2024-07-24 19:16:20.445375] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:34.388 [2024-07-24 19:16:20.445519] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:34.388 [2024-07-24 19:16:20.445734] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:15:34.388 [2024-07-24 19:16:20.446015] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:34.388 [2024-07-24 19:16:20.446231] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:34.955 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:34.955 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:15:34.955 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:35.890 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:36.149 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:36.149 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:36.149 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:36.149 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:36.149 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:36.149 Malloc1 00:15:36.408 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:36.408 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:36.666 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:36.926 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:36.926 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:36.926 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:36.926 Malloc2 00:15:36.926 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:37.184 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:37.442 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 
-s 0 00:15:37.700 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:37.700 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1507729 00:15:37.700 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1507729 ']' 00:15:37.700 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1507729 00:15:37.700 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:37.700 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:37.700 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1507729 00:15:37.700 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:37.700 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:37.700 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1507729' 00:15:37.700 killing process with pid 1507729 00:15:37.700 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1507729 00:15:37.700 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1507729 00:15:37.960 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:37.960 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:37.960 00:15:37.960 real 0m51.479s 00:15:37.960 user 3m22.581s 00:15:37.960 sys 0m4.793s 00:15:37.960 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:37.960 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:37.960 ************************************ 00:15:37.960 END TEST nvmf_vfio_user 00:15:37.960 ************************************ 00:15:37.960 19:16:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:37.960 19:16:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:37.960 19:16:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:37.960 19:16:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:37.960 ************************************ 00:15:37.960 START TEST nvmf_vfio_user_nvme_compliance 00:15:37.960 ************************************ 00:15:37.960 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:37.960 * Looking for test storage... 
00:15:37.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:37.960 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:37.960 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:37.960 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:37.960 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:37.960 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:37.960 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:37.960 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:37.960 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:37.960 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:37.960 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:37.960 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:37.960 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:37.960 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:37.960 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:37.960 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:37.960 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:37.960 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:37.960 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:37.960 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:37.960 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:37.960 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:37.960 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:37.960 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.961 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.961 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.961 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:37.961 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.961 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:15:37.961 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:37.961 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:37.961 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:37.961 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:37.961 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:37.961 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:37.961 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:37.961 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:37.961 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:37.961 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:37.961 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:37.961 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:37.961 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:37.961 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1508342 00:15:37.961 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1508342' 00:15:37.961 Process pid: 1508342 00:15:37.961 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:37.961 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1508342 00:15:37.961 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:37.961 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 1508342 ']' 00:15:37.961 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.961 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:37.961 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.961 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:37.961 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:38.221 [2024-07-24 19:16:24.225800] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:15:38.221 [2024-07-24 19:16:24.225859] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.221 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.221 [2024-07-24 19:16:24.296630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:38.221 [2024-07-24 19:16:24.365881] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.221 [2024-07-24 19:16:24.365924] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:38.221 [2024-07-24 19:16:24.365933] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:38.221 [2024-07-24 19:16:24.365942] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:38.221 [2024-07-24 19:16:24.365949] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:38.221 [2024-07-24 19:16:24.366007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.221 [2024-07-24 19:16:24.366100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:38.221 [2024-07-24 19:16:24.366102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.789 19:16:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:38.789 19:16:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:15:38.789 19:16:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:40.167 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:40.167 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:40.167 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:40.167 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.167 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:40.167 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.167 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:40.167 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:40.167 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.167 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:40.167 malloc0 00:15:40.167 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.167 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 
32 00:15:40.167 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.167 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:40.167 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.167 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:40.167 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.167 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:40.167 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.167 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:40.167 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.167 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:40.167 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.167 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:40.167 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.167 00:15:40.167 00:15:40.167 CUnit - A unit testing framework for C - Version 2.1-3 00:15:40.167 http://cunit.sourceforge.net/ 00:15:40.167 00:15:40.167 00:15:40.167 Suite: nvme_compliance 00:15:40.167 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-24 19:16:26.270138] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.167 [2024-07-24 19:16:26.271493] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:40.167 [2024-07-24 19:16:26.271510] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:40.167 [2024-07-24 19:16:26.271518] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:40.167 [2024-07-24 19:16:26.273166] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.167 passed 00:15:40.167 Test: admin_identify_ctrlr_verify_fused ...[2024-07-24 19:16:26.353740] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.167 [2024-07-24 19:16:26.356759] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.168 passed 00:15:40.427 Test: admin_identify_ns ...[2024-07-24 19:16:26.433531] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.427 [2024-07-24 19:16:26.492730] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:40.427 [2024-07-24 19:16:26.500727] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:40.427 [2024-07-24 
19:16:26.521822] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.427 passed 00:15:40.427 Test: admin_get_features_mandatory_features ...[2024-07-24 19:16:26.597092] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.427 [2024-07-24 19:16:26.600108] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.427 passed 00:15:40.707 Test: admin_get_features_optional_features ...[2024-07-24 19:16:26.676578] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.707 [2024-07-24 19:16:26.679601] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.707 passed 00:15:40.707 Test: admin_set_features_number_of_queues ...[2024-07-24 19:16:26.755140] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.707 [2024-07-24 19:16:26.859804] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.707 passed 00:15:40.707 Test: admin_get_log_page_mandatory_logs ...[2024-07-24 19:16:26.932221] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.707 [2024-07-24 19:16:26.935241] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.965 passed 00:15:40.965 Test: admin_get_log_page_with_lpo ...[2024-07-24 19:16:27.012767] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.965 [2024-07-24 19:16:27.081723] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:40.965 [2024-07-24 19:16:27.094785] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.965 passed 00:15:40.965 Test: fabric_property_get ...[2024-07-24 19:16:27.168197] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.965 [2024-07-24 19:16:27.169429] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:40.965 [2024-07-24 19:16:27.171216] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.965 passed 00:15:41.224 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-24 19:16:27.248722] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.224 [2024-07-24 19:16:27.249975] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:41.224 [2024-07-24 19:16:27.251746] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.224 passed 00:15:41.224 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-24 19:16:27.326775] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.224 [2024-07-24 19:16:27.411734] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:41.224 [2024-07-24 19:16:27.427722] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:41.225 [2024-07-24 19:16:27.432820] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.225 passed 00:15:41.483 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-24 19:16:27.506110] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.483 [2024-07-24 19:16:27.507344] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 
00:15:41.483 [2024-07-24 19:16:27.509134] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.483 passed 00:15:41.483 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-24 19:16:27.585619] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.483 [2024-07-24 19:16:27.662725] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:41.483 [2024-07-24 19:16:27.686727] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:41.483 [2024-07-24 19:16:27.691799] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.483 passed 00:15:41.743 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-24 19:16:27.764310] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.743 [2024-07-24 19:16:27.765529] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:41.743 [2024-07-24 19:16:27.765555] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:41.743 [2024-07-24 19:16:27.767329] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.743 passed 00:15:41.743 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-24 19:16:27.844883] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.743 [2024-07-24 19:16:27.936725] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:41.743 [2024-07-24 19:16:27.944732] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:41.743 [2024-07-24 19:16:27.952725] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:41.743 [2024-07-24 19:16:27.960723] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:42.002 [2024-07-24 19:16:27.989801] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.002 passed 00:15:42.002 Test: admin_create_io_sq_verify_pc ...[2024-07-24 19:16:28.063313] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.002 [2024-07-24 19:16:28.078729] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:42.002 [2024-07-24 19:16:28.096484] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.002 passed 00:15:42.002 Test: admin_create_io_qp_max_qps ...[2024-07-24 19:16:28.172962] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.379 [2024-07-24 19:16:29.280728] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:43.637 [2024-07-24 19:16:29.662358] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.637 passed 00:15:43.637 Test: admin_create_io_sq_shared_cq ...[2024-07-24 19:16:29.738806] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.637 [2024-07-24 19:16:29.871721] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:43.896 [2024-07-24 19:16:29.908777] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.896 passed 00:15:43.896 00:15:43.896 Run Summary: Type Total Ran Passed Failed Inactive 00:15:43.896 
suites 1 1 n/a 0 0 00:15:43.896 tests 18 18 18 0 0 00:15:43.896 asserts 360 360 360 0 n/a 00:15:43.896 00:15:43.896 Elapsed time = 1.496 seconds 00:15:43.896 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1508342 00:15:43.896 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 1508342 ']' 00:15:43.896 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 1508342 00:15:43.896 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:15:43.896 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:43.896 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1508342 00:15:43.896 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:43.896 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:43.896 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1508342' 00:15:43.896 killing process with pid 1508342 00:15:43.896 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 1508342 00:15:43.896 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 1508342 00:15:44.156 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:44.156 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:44.156 00:15:44.156 real 0m6.167s 00:15:44.156 user 0m17.350s 00:15:44.156 sys 0m0.716s 00:15:44.156 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:44.156 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:44.156 ************************************ 00:15:44.156 END TEST nvmf_vfio_user_nvme_compliance 00:15:44.156 ************************************ 00:15:44.156 19:16:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:44.156 19:16:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:44.156 19:16:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:44.156 19:16:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:44.156 ************************************ 00:15:44.156 START TEST nvmf_vfio_user_fuzz 00:15:44.156 ************************************ 00:15:44.156 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:44.156 * Looking for test storage... 
00:15:44.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:44.416 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:44.416 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:44.416 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:44.416 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:44.416 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:44.416 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:44.416 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:44.416 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:44.416 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:44.416 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:44.416 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:44.416 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:44.416 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:44.416 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:44.416 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:44.416 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:44.416 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:44.416 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:44.416 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:44.416 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:44.416 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:44.416 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:44.416 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.417 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.417 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.417 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:44.417 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.417 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:15:44.417 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:44.417 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:44.417 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:44.417 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:44.417 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:44.417 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:15:44.417 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:44.417 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:44.417 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:44.417 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:44.417 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:44.417 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:44.417 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:44.417 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:44.417 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:44.417 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1509460 00:15:44.417 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1509460' 00:15:44.417 Process pid: 1509460 00:15:44.417 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:44.417 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1509460 00:15:44.417 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1509460 ']' 00:15:44.417 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.417 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:44.417 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:44.417 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
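For reference, the waitforlisten step above blocks until the freshly launched nvmf_tgt answers on its RPC socket. A minimal standalone sketch of the same readiness check, assuming the default /var/tmp/spdk.sock socket (spdk_get_version is a stock SPDK RPC; the loop below is a hypothetical simplification of the helper, not its actual code):

  # hypothetical readiness loop equivalent in spirit to waitforlisten:
  # poll the RPC server until it accepts a request
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  until "$RPC" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done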
00:15:44.417 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:44.417 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:45.356 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:45.356 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:15:45.356 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:46.294 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:46.294 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.294 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:46.294 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.294 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:46.294 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:46.294 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.294 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:46.294 malloc0 00:15:46.294 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.294 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:46.294 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.294 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:46.294 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.294 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:46.294 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.294 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:46.294 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.294 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:46.294 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.294 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:46.294 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.294 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
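
[editor's note] Everything between app start and the fuzzer proper is one mkdir plus five RPCs against the default /var/tmp/spdk.sock. Condensed, with arguments exactly as they appear above:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$SPDK/scripts/rpc.py"
    $rpc nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user                      # directory that will hold the listener socket
    $rpc bdev_malloc_create 64 512 -b malloc0        # 64 MiB RAM disk, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

The trid string recorded at the end is what the fuzzer dials next.
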
00:15:46.294 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:18.414 Fuzzing completed. Shutting down the fuzz application 00:16:18.414 00:16:18.414 Dumping successful admin opcodes: 00:16:18.414 8, 9, 10, 24, 00:16:18.414 Dumping successful io opcodes: 00:16:18.414 0, 00:16:18.415 NS: 0x200003a1ef00 I/O qp, Total commands completed: 910921, total successful commands: 3555, random_seed: 1373252224 00:16:18.415 NS: 0x200003a1ef00 admin qp, Total commands completed: 224756, total successful commands: 1808, random_seed: 3007071552 00:16:18.415 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:18.415 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.415 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:18.415 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.415 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1509460 00:16:18.415 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1509460 ']' 00:16:18.415 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 1509460 00:16:18.415 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:16:18.415 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:18.415 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1509460 00:16:18.415 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:18.415 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:18.415 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1509460' 00:16:18.415 killing process with pid 1509460 00:16:18.415 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 1509460 00:16:18.415 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 1509460 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:18.415 00:16:18.415 real 0m32.804s 00:16:18.415 user 0m29.416s 00:16:18.415 sys 0m33.061s 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:18.415 
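
[editor's note] Thirty seconds of fuzzing pushed roughly 911k I/O and 225k admin commands through the vfio-user queue pairs. Read as decimal NVMe opcodes, the successful admin set 8/9/10/24 is Abort, Set Features, Get Features and Keep Alive, and the lone successful I/O opcode 0 is Flush. Because -S pins the PRNG seed, the run replays deterministically; the invocation, flags copied verbatim from above (-t is the duration in seconds):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a
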
************************************ 00:16:18.415 END TEST nvmf_vfio_user_fuzz 00:16:18.415 ************************************ 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:18.415 ************************************ 00:16:18.415 START TEST nvmf_auth_target 00:16:18.415 ************************************ 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:18.415 * Looking for test storage... 00:16:18.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:18.415 19:17:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 
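
[editor's note] Each nested source of /etc/opt/spdk-pkgdep/paths/export.sh prepends the same go/protoc/golangci directories again, which is why the PATH above keeps growing through the run; the duplicates are harmless for resolution and only cost lookup time. A purely illustrative cleanup one-liner for interactive debugging, not part of the test:

    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//')
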
00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:18.415 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:18.416 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:18.416 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:18.416 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:18.416 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:18.416 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:18.416 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:18.416 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:18.416 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
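
[editor's note] The test matrix is now fixed: three digests, six DH groups, and (once generated below) four key pairs, with every combination pushed through connect_authenticate. One iteration reduces, approximately, to the RPC sequence that shows up later in this log; the sketch inlines it and skips the ${ckeys[i]:+...} expansion the real script uses because key3 has no controller key:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    tgt="$SPDK/scripts/rpc.py"                          # nvmf target, default /var/tmp/spdk.sock
    host="$SPDK/scripts/rpc.py -s /var/tmp/host.sock"   # spdk_tgt acting as the NVMe host
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
    for digest in sha256 sha384 sha512; do
        for dhgroup in null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
            for keyid in 0 1 2 3; do
                $host bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                $tgt nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
                    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
                $host bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
                    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
                    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
                $host bdev_nvme_detach_controller nvme0
            done
        done
    done
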
nvmf/common.sh@292 -- # pci_net_devs=() 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:23.688 19:17:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:23.688 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:23.688 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:23.688 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:23.688 Found net devices under 0000:af:00.0: cvl_0_0 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:23.689 19:17:09 
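
[editor's note] Both functions of the same E810 adapter (device 0x159b at 0000:af:00.0 and .1) pass the vendor/device-ID match, and the pci_net_devs glob over sysfs is what turns a PCI address into a kernel netdev name. The same lookup done by hand:

    for pci in 0000:af:00.0 0000:af:00.1; do
        echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/)"   # cvl_0_0 and cvl_0_1 on this rig
    done
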
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:23.689 Found net devices under 0000:af:00.1: cvl_0_1 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:23.689 19:17:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:23.689 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:23.689 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:16:23.689 00:16:23.689 --- 10.0.0.2 ping statistics --- 00:16:23.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.689 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:23.689 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:23.689 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:16:23.689 00:16:23.689 --- 10.0.0.1 ping statistics --- 00:16:23.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.689 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:23.689 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:23.949 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:16:23.949 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:23.949 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:23.949 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.949 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:23.949 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1518192 00:16:23.949 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1518192 00:16:23.949 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1518192 ']' 00:16:23.949 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.949 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:23.949 19:17:09 
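
[editor's note] nvmf_tcp_init has split the two ports into a point-to-point pair: cvl_0_0 moves into the fresh cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1); the iptables rule opens the NVMe/TCP port and the two single-packet pings prove the path in both directions before any NVMe traffic flows. The wiring, condensed from the commands above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Prepending the netns exec wrapper to NVMF_APP is what puts nvmf_tgt itself on the target end of this link.
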
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.949 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:23.949 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1518364 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3baee6c3063a5cb39516fae252b716897e4a4967f776b083 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.aPX 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3baee6c3063a5cb39516fae252b716897e4a4967f776b083 0 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3baee6c3063a5cb39516fae252b716897e4a4967f776b083 0 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3baee6c3063a5cb39516fae252b716897e4a4967f776b083 00:16:24.924 19:17:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.aPX 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.aPX 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.aPX 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0e82f2e74dc72a8523b0bd426f0b8c4e287f3471a3c01e9f303805065395f8fc 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.1VY 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0e82f2e74dc72a8523b0bd426f0b8c4e287f3471a3c01e9f303805065395f8fc 3 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0e82f2e74dc72a8523b0bd426f0b8c4e287f3471a3c01e9f303805065395f8fc 3 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:24.924 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:24.925 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0e82f2e74dc72a8523b0bd426f0b8c4e287f3471a3c01e9f303805065395f8fc 00:16:24.925 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:24.925 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:24.925 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.1VY 00:16:24.925 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.1VY 00:16:24.925 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.1VY 00:16:24.925 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:16:24.925 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:24.925 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:24.925 19:17:10 
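
[editor's note] gen_dhchap_key's recipe is visible right up to the hidden 'python -' heredoc: pull len/2 random bytes with xxd -p (yielding a len-character hex string), map the digest name to its index (null=0, sha256=1, sha384=2, sha512=3), and drop a DHHC-1 secret into a chmod-0600 mktemp file. The sketch below reconstructs the heredoc on the assumption that it follows the NVMe-oF DH-CHAP secret encoding, base64 over the ASCII hex key with a little-endian CRC32 appended; that suffix is not visible in this log:

    key=$(xxd -p -c0 -l 24 /dev/urandom)     # 48 hex chars, as in gen_dhchap_key null 48
    file=$(mktemp -t spdk.key-null.XXX)
    # assumed encoding: DHHC-1:<digest index as 2-digit hex>:<base64(key || crc32_le(key))>:
    python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); c=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(k+c).decode()))' "$key" 0 > "$file"
    chmod 0600 "$file"
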
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:24.925 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:24.925 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:24.925 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:24.925 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d43e5bca9dc0a94e22083503a3a25915 00:16:24.925 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:24.925 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.W7c 00:16:24.925 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d43e5bca9dc0a94e22083503a3a25915 1 00:16:24.925 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d43e5bca9dc0a94e22083503a3a25915 1 00:16:24.925 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:24.925 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:24.925 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d43e5bca9dc0a94e22083503a3a25915 00:16:24.925 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:24.925 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.W7c 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.W7c 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.W7c 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=52e6ea860adc630eb33f0a8164a8741f8deb1f23e80283d5 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.wQ4 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 52e6ea860adc630eb33f0a8164a8741f8deb1f23e80283d5 2 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
52e6ea860adc630eb33f0a8164a8741f8deb1f23e80283d5 2 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=52e6ea860adc630eb33f0a8164a8741f8deb1f23e80283d5 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.wQ4 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.wQ4 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.wQ4 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2a8e559e706c5e2fa416dc861fff6db1cb3ef71e959bfa00 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ZdN 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2a8e559e706c5e2fa416dc861fff6db1cb3ef71e959bfa00 2 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2a8e559e706c5e2fa416dc861fff6db1cb3ef71e959bfa00 2 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2a8e559e706c5e2fa416dc861fff6db1cb3ef71e959bfa00 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:24.925 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ZdN 00:16:25.183 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ZdN 00:16:25.183 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.ZdN 00:16:25.183 19:17:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:16:25.183 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:25.183 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:25.183 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:25.183 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0a529ae2a506e03bd1f7669ae574a908 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.bhh 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0a529ae2a506e03bd1f7669ae574a908 1 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0a529ae2a506e03bd1f7669ae574a908 1 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0a529ae2a506e03bd1f7669ae574a908 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.bhh 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.bhh 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.bhh 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a31eb62aba6a2505b3d1c7d1b4bca3815fb653615d09f3b3dc0ed5e52406310f 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:25.184 
19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.9sT 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a31eb62aba6a2505b3d1c7d1b4bca3815fb653615d09f3b3dc0ed5e52406310f 3 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a31eb62aba6a2505b3d1c7d1b4bca3815fb653615d09f3b3dc0ed5e52406310f 3 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a31eb62aba6a2505b3d1c7d1b4bca3815fb653615d09f3b3dc0ed5e52406310f 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.9sT 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.9sT 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.9sT 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1518192 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1518192 ']' 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:25.184 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.443 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:25.443 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:25.443 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1518364 /var/tmp/host.sock 00:16:25.443 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1518364 ']' 00:16:25.443 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:25.443 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:25.443 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
00:16:25.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:25.443 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:25.443 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.443 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:25.443 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:25.443 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:16:25.443 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.443 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.702 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.702 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:25.702 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.aPX 00:16:25.702 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.702 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.702 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.702 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.aPX 00:16:25.702 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.aPX 00:16:25.702 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.1VY ]] 00:16:25.702 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1VY 00:16:25.702 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.702 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.702 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.702 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1VY 00:16:25.702 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1VY 00:16:25.961 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:25.961 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.W7c 00:16:25.961 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.961 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.961 19:17:12 
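
[editor's note] The loop above registers every key file twice, once on the nvmf target over the default /var/tmp/spdk.sock and once on the host-side spdk_tgt over /var/tmp/host.sock, since both ends of DH-CHAP must resolve the same key names (key0..key3 plus ckey0..ckey2; key3 deliberately has no controller key). For the first pair that amounts to:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    tgt="$SPDK/scripts/rpc.py"
    host="$SPDK/scripts/rpc.py -s /var/tmp/host.sock"
    $tgt  keyring_file_add_key key0  /tmp/spdk.key-null.aPX
    $host keyring_file_add_key key0  /tmp/spdk.key-null.aPX
    $tgt  keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1VY
    $host keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1VY
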
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.961 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.W7c 00:16:25.961 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.W7c 00:16:26.221 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.wQ4 ]] 00:16:26.221 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.wQ4 00:16:26.221 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.221 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.221 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.221 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.wQ4 00:16:26.221 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.wQ4 00:16:26.221 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:26.221 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ZdN 00:16:26.221 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.221 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.221 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.221 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.ZdN 00:16:26.221 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.ZdN 00:16:26.480 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.bhh ]] 00:16:26.480 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.bhh 00:16:26.480 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.480 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.480 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.480 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.bhh 00:16:26.480 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.bhh 00:16:26.739 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:26.739 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.9sT 00:16:26.739 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.739 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.739 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.739 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.9sT 00:16:26.739 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.9sT 00:16:26.739 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:16:26.739 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:26.739 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:26.739 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:26.739 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:26.739 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:26.999 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:16:26.999 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:26.999 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:26.999 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:26.999 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:26.999 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.999 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.999 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.999 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.999 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.999 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.999 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.258 00:16:27.258 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:27.258 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:27.258 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.516 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.516 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.516 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.516 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.516 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.516 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:27.516 { 00:16:27.516 "cntlid": 1, 00:16:27.516 "qid": 0, 00:16:27.516 "state": "enabled", 00:16:27.516 "thread": "nvmf_tgt_poll_group_000", 00:16:27.516 "listen_address": { 00:16:27.516 "trtype": "TCP", 00:16:27.516 "adrfam": "IPv4", 00:16:27.516 "traddr": "10.0.0.2", 00:16:27.516 "trsvcid": "4420" 00:16:27.516 }, 00:16:27.516 "peer_address": { 00:16:27.516 "trtype": "TCP", 00:16:27.516 "adrfam": "IPv4", 00:16:27.516 "traddr": "10.0.0.1", 00:16:27.516 "trsvcid": "55086" 00:16:27.516 }, 00:16:27.516 "auth": { 00:16:27.516 "state": "completed", 00:16:27.516 "digest": "sha256", 00:16:27.516 "dhgroup": "null" 00:16:27.516 } 00:16:27.516 } 00:16:27.516 ]' 00:16:27.517 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:27.517 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:27.517 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:27.517 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:27.517 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:27.517 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.517 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.517 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.775 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret 
DHHC-1:00:M2JhZWU2YzMwNjNhNWNiMzk1MTZmYWUyNTJiNzE2ODk3ZTRhNDk2N2Y3NzZiMDgzJkG/Xw==: --dhchap-ctrl-secret DHHC-1:03:MGU4MmYyZTc0ZGM3MmE4NTIzYjBiZDQyNmYwYjhjNGUyODdmMzQ3MWEzYzAxZTlmMzAzODA1MDY1Mzk1ZjhmY/3k1VU=: 00:16:28.342 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.342 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:28.342 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.342 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.342 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.342 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:28.342 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:28.342 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:28.601 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:16:28.601 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:28.602 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:28.602 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:28.602 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:28.602 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.602 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.602 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.602 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.602 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.602 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.602 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:16:28.602 00:16:28.602 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:28.602 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.602 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:28.861 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.861 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.861 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.861 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.861 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.861 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:28.861 { 00:16:28.861 "cntlid": 3, 00:16:28.861 "qid": 0, 00:16:28.861 "state": "enabled", 00:16:28.861 "thread": "nvmf_tgt_poll_group_000", 00:16:28.861 "listen_address": { 00:16:28.861 "trtype": "TCP", 00:16:28.861 "adrfam": "IPv4", 00:16:28.861 "traddr": "10.0.0.2", 00:16:28.861 "trsvcid": "4420" 00:16:28.861 }, 00:16:28.861 "peer_address": { 00:16:28.861 "trtype": "TCP", 00:16:28.861 "adrfam": "IPv4", 00:16:28.861 "traddr": "10.0.0.1", 00:16:28.861 "trsvcid": "55110" 00:16:28.861 }, 00:16:28.861 "auth": { 00:16:28.861 "state": "completed", 00:16:28.861 "digest": "sha256", 00:16:28.861 "dhgroup": "null" 00:16:28.861 } 00:16:28.861 } 00:16:28.861 ]' 00:16:28.861 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:28.861 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.861 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:29.120 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:29.120 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:29.120 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.120 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.120 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.120 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZDQzZTViY2E5ZGMwYTk0ZTIyMDgzNTAzYTNhMjU5MTUqssYm: --dhchap-ctrl-secret DHHC-1:02:NTJlNmVhODYwYWRjNjMwZWIzM2YwYTgxNjRhODc0MWY4ZGViMWYyM2U4MDI4M2Q10rtSAw==: 00:16:29.687 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.687 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:16:29.687 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:29.687 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.687 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.687 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.687 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:29.687 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:29.687 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:29.945 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:16:29.945 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:29.945 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:29.945 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:29.945 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:29.945 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.946 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.946 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.946 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.946 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.946 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.946 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.204 00:16:30.204 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:30.204 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.204 19:17:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:30.463 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.463 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.463 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.463 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.463 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.463 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:30.463 { 00:16:30.463 "cntlid": 5, 00:16:30.463 "qid": 0, 00:16:30.463 "state": "enabled", 00:16:30.463 "thread": "nvmf_tgt_poll_group_000", 00:16:30.463 "listen_address": { 00:16:30.463 "trtype": "TCP", 00:16:30.463 "adrfam": "IPv4", 00:16:30.463 "traddr": "10.0.0.2", 00:16:30.463 "trsvcid": "4420" 00:16:30.463 }, 00:16:30.463 "peer_address": { 00:16:30.463 "trtype": "TCP", 00:16:30.463 "adrfam": "IPv4", 00:16:30.463 "traddr": "10.0.0.1", 00:16:30.463 "trsvcid": "55028" 00:16:30.463 }, 00:16:30.463 "auth": { 00:16:30.463 "state": "completed", 00:16:30.463 "digest": "sha256", 00:16:30.463 "dhgroup": "null" 00:16:30.463 } 00:16:30.463 } 00:16:30.463 ]' 00:16:30.463 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:30.463 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.463 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:30.463 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:30.463 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:30.463 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.463 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.463 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.722 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmE4ZTU1OWU3MDZjNWUyZmE0MTZkYzg2MWZmZjZkYjFjYjNlZjcxZTk1OWJmYTAwoLxyXg==: --dhchap-ctrl-secret DHHC-1:01:MGE1MjlhZTJhNTA2ZTAzYmQxZjc2NjlhZTU3NGE5MDgFpX2a: 00:16:31.290 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.290 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:31.290 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
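(Aside: each connect_authenticate iteration in this trace has the same shape — constrain the host's allowed digest/dhgroup, register the host NQN on the subsystem with a key pair, attach a controller through the host app, verify it came up, tear down. A condensed sketch of one iteration; hostrpc mirrors target/auth.sh@31, rpc_cmd is the harness's wrapper for the target's own RPC socket, and $hostnqn stands in for the uuid-based host NQN used throughout this log:

    hostrpc() {
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/host.sock "$@"
    }

    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
            --dhchap-key key2 --dhchap-ctrlr-key ckey2
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
            --dhchap-key key2 --dhchap-ctrlr-key ckey2
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    hostrpc bdev_nvme_detach_controller nvme0
)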
00:16:31.290 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.290 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.290 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:31.290 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:31.290 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:31.550 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:16:31.550 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:31.550 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:31.550 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:31.550 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:31.550 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.550 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:16:31.550 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.550 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.550 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.550 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:31.550 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:31.550 00:16:31.550 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.809 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.809 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.809 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.809 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.809 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:31.809 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.809 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.809 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:31.809 { 00:16:31.809 "cntlid": 7, 00:16:31.809 "qid": 0, 00:16:31.809 "state": "enabled", 00:16:31.809 "thread": "nvmf_tgt_poll_group_000", 00:16:31.809 "listen_address": { 00:16:31.809 "trtype": "TCP", 00:16:31.809 "adrfam": "IPv4", 00:16:31.809 "traddr": "10.0.0.2", 00:16:31.809 "trsvcid": "4420" 00:16:31.809 }, 00:16:31.809 "peer_address": { 00:16:31.809 "trtype": "TCP", 00:16:31.809 "adrfam": "IPv4", 00:16:31.809 "traddr": "10.0.0.1", 00:16:31.809 "trsvcid": "55048" 00:16:31.809 }, 00:16:31.809 "auth": { 00:16:31.809 "state": "completed", 00:16:31.809 "digest": "sha256", 00:16:31.809 "dhgroup": "null" 00:16:31.809 } 00:16:31.809 } 00:16:31.809 ]' 00:16:31.809 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:31.809 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:31.809 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:32.068 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:32.069 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:32.069 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.069 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.069 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.069 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YTMxZWI2MmFiYTZhMjUwNWIzZDFjN2QxYjRiY2EzODE1ZmI2NTM2MTVkMDlmM2IzZGMwZWQ1ZTUyNDA2MzEwZqZhPb0=: 00:16:32.637 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.637 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:32.637 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.637 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.637 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.637 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:32.637 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:32.637 19:17:18 
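(Aside: before advancing to the next key, each iteration also exercises the kernel initiator. Unlike the SPDK host, nvme-cli takes the DHHC-1 secrets inline rather than keyring names. A sketch of that leg with placeholder secrets — the real ones are the base64 strings visible in the trace:

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
         -q "$hostnqn" --hostid "$hostid" \
         --dhchap-secret "DHHC-1:00:<host-secret>:" \
         --dhchap-ctrl-secret "DHHC-1:03:<controller-secret>:"  # omitted when no ckey exists
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

Note the key3 iteration just completed passed only --dhchap-secret, since no controller key was generated for that index.)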
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:32.637 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:32.897 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:16:32.897 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:32.897 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:32.897 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:32.897 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:32.897 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.897 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.897 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.897 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.897 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.897 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.897 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.156 00:16:33.156 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:33.156 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.156 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:33.416 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.416 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.416 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.416 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.416 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.416 19:17:19 
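(Aside: on the target side, success is asserted by dumping the subsystem's queue pairs and inspecting the auth block, exactly as the jq probes below do. A compact version of those target/auth.sh@45-@48 checks for the ffdhe2048 round:

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # handshake finished
)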
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:33.416 { 00:16:33.416 "cntlid": 9, 00:16:33.416 "qid": 0, 00:16:33.416 "state": "enabled", 00:16:33.416 "thread": "nvmf_tgt_poll_group_000", 00:16:33.416 "listen_address": { 00:16:33.416 "trtype": "TCP", 00:16:33.416 "adrfam": "IPv4", 00:16:33.416 "traddr": "10.0.0.2", 00:16:33.416 "trsvcid": "4420" 00:16:33.416 }, 00:16:33.416 "peer_address": { 00:16:33.416 "trtype": "TCP", 00:16:33.416 "adrfam": "IPv4", 00:16:33.416 "traddr": "10.0.0.1", 00:16:33.416 "trsvcid": "55082" 00:16:33.416 }, 00:16:33.416 "auth": { 00:16:33.416 "state": "completed", 00:16:33.416 "digest": "sha256", 00:16:33.416 "dhgroup": "ffdhe2048" 00:16:33.416 } 00:16:33.416 } 00:16:33.416 ]' 00:16:33.416 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:33.416 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.416 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:33.416 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:33.416 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:33.416 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.416 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.416 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.675 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:M2JhZWU2YzMwNjNhNWNiMzk1MTZmYWUyNTJiNzE2ODk3ZTRhNDk2N2Y3NzZiMDgzJkG/Xw==: --dhchap-ctrl-secret DHHC-1:03:MGU4MmYyZTc0ZGM3MmE4NTIzYjBiZDQyNmYwYjhjNGUyODdmMzQ3MWEzYzAxZTlmMzAzODA1MDY1Mzk1ZjhmY/3k1VU=: 00:16:34.244 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.244 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:34.244 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.244 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.244 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.244 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:34.244 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:34.244 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:34.504 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:16:34.504 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:34.504 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:34.504 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:34.504 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:34.504 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.504 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.504 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.504 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.504 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.504 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.504 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.504 00:16:34.504 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:34.504 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:34.504 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.763 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.763 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.763 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.763 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.763 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.763 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:34.763 { 00:16:34.763 "cntlid": 11, 00:16:34.763 "qid": 0, 00:16:34.763 "state": "enabled", 00:16:34.763 "thread": "nvmf_tgt_poll_group_000", 00:16:34.763 "listen_address": { 
00:16:34.763 "trtype": "TCP", 00:16:34.763 "adrfam": "IPv4", 00:16:34.763 "traddr": "10.0.0.2", 00:16:34.763 "trsvcid": "4420" 00:16:34.763 }, 00:16:34.763 "peer_address": { 00:16:34.763 "trtype": "TCP", 00:16:34.763 "adrfam": "IPv4", 00:16:34.763 "traddr": "10.0.0.1", 00:16:34.763 "trsvcid": "55104" 00:16:34.763 }, 00:16:34.763 "auth": { 00:16:34.763 "state": "completed", 00:16:34.763 "digest": "sha256", 00:16:34.763 "dhgroup": "ffdhe2048" 00:16:34.763 } 00:16:34.763 } 00:16:34.763 ]' 00:16:34.763 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:34.763 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.763 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:35.022 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:35.022 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:35.023 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.023 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.023 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.023 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZDQzZTViY2E5ZGMwYTk0ZTIyMDgzNTAzYTNhMjU5MTUqssYm: --dhchap-ctrl-secret DHHC-1:02:NTJlNmVhODYwYWRjNjMwZWIzM2YwYTgxNjRhODc0MWY4ZGViMWYyM2U4MDI4M2Q10rtSAw==: 00:16:35.591 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.591 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.591 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:35.591 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.591 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.591 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.591 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:35.591 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:35.591 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:35.851 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:16:35.851 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.851 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:35.851 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:35.851 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:35.851 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.851 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.851 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.851 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.851 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.851 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.851 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.110 00:16:36.110 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:36.110 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:36.110 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.369 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.369 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.369 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.369 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.369 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.369 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:36.369 { 00:16:36.369 "cntlid": 13, 00:16:36.369 "qid": 0, 00:16:36.369 "state": "enabled", 00:16:36.369 "thread": "nvmf_tgt_poll_group_000", 00:16:36.369 "listen_address": { 00:16:36.369 "trtype": "TCP", 00:16:36.369 "adrfam": "IPv4", 00:16:36.369 "traddr": "10.0.0.2", 00:16:36.369 "trsvcid": "4420" 00:16:36.369 }, 00:16:36.369 "peer_address": { 00:16:36.369 "trtype": "TCP", 00:16:36.369 "adrfam": "IPv4", 00:16:36.369 "traddr": "10.0.0.1", 00:16:36.369 "trsvcid": "55120" 00:16:36.369 }, 00:16:36.369 "auth": { 00:16:36.369 
"state": "completed", 00:16:36.369 "digest": "sha256", 00:16:36.369 "dhgroup": "ffdhe2048" 00:16:36.369 } 00:16:36.369 } 00:16:36.369 ]' 00:16:36.369 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:36.369 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:36.369 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:36.369 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:36.369 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:36.369 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.369 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.369 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.628 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmE4ZTU1OWU3MDZjNWUyZmE0MTZkYzg2MWZmZjZkYjFjYjNlZjcxZTk1OWJmYTAwoLxyXg==: --dhchap-ctrl-secret DHHC-1:01:MGE1MjlhZTJhNTA2ZTAzYmQxZjc2NjlhZTU3NGE5MDgFpX2a: 00:16:37.248 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.248 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:37.248 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.248 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.248 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.248 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:37.248 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:37.248 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:37.248 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:16:37.248 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:37.248 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:37.248 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:37.248 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:16:37.248 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.248 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:16:37.248 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.248 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.248 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.248 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:37.248 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:37.514 00:16:37.514 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:37.514 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.514 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.773 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.773 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.773 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.773 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.773 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.773 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:37.773 { 00:16:37.773 "cntlid": 15, 00:16:37.773 "qid": 0, 00:16:37.773 "state": "enabled", 00:16:37.773 "thread": "nvmf_tgt_poll_group_000", 00:16:37.773 "listen_address": { 00:16:37.773 "trtype": "TCP", 00:16:37.773 "adrfam": "IPv4", 00:16:37.773 "traddr": "10.0.0.2", 00:16:37.773 "trsvcid": "4420" 00:16:37.773 }, 00:16:37.773 "peer_address": { 00:16:37.773 "trtype": "TCP", 00:16:37.773 "adrfam": "IPv4", 00:16:37.773 "traddr": "10.0.0.1", 00:16:37.773 "trsvcid": "55136" 00:16:37.773 }, 00:16:37.773 "auth": { 00:16:37.773 "state": "completed", 00:16:37.773 "digest": "sha256", 00:16:37.773 "dhgroup": "ffdhe2048" 00:16:37.773 } 00:16:37.773 } 00:16:37.773 ]' 00:16:37.773 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:37.773 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:37.773 19:17:23 
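(Aside: this is why the key3 iterations carry no --dhchap-ctrlr-key. ckeys[3] is empty in this run ([[ -n '' ]] earlier in the trace), and the harness's ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expands to an empty array in that case, so the bidirectional-auth arguments simply vanish from the command line. The same bash idiom in isolation:

    # ${var:+word} yields "word" only when var is set and non-empty.
    keyid=3; ckeys[3]=""
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${#ckey[@]}"   # prints 0: no controller-key arguments are emitted
)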
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:37.773 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:37.773 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:37.773 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.773 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.773 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.033 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YTMxZWI2MmFiYTZhMjUwNWIzZDFjN2QxYjRiY2EzODE1ZmI2NTM2MTVkMDlmM2IzZGMwZWQ1ZTUyNDA2MzEwZqZhPb0=: 00:16:38.601 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.601 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.601 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:38.601 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.601 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.601 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.601 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:38.601 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:38.601 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:38.601 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:38.860 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:16:38.860 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:38.860 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:38.860 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:38.860 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:38.860 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.860 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.860 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.860 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.860 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.860 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.860 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.119 00:16:39.119 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:39.119 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:39.119 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.119 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.119 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.119 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.119 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.119 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.119 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:39.119 { 00:16:39.119 "cntlid": 17, 00:16:39.119 "qid": 0, 00:16:39.119 "state": "enabled", 00:16:39.119 "thread": "nvmf_tgt_poll_group_000", 00:16:39.119 "listen_address": { 00:16:39.119 "trtype": "TCP", 00:16:39.119 "adrfam": "IPv4", 00:16:39.119 "traddr": "10.0.0.2", 00:16:39.119 "trsvcid": "4420" 00:16:39.119 }, 00:16:39.119 "peer_address": { 00:16:39.119 "trtype": "TCP", 00:16:39.119 "adrfam": "IPv4", 00:16:39.119 "traddr": "10.0.0.1", 00:16:39.119 "trsvcid": "55162" 00:16:39.119 }, 00:16:39.119 "auth": { 00:16:39.119 "state": "completed", 00:16:39.119 "digest": "sha256", 00:16:39.119 "dhgroup": "ffdhe3072" 00:16:39.119 } 00:16:39.119 } 00:16:39.119 ]' 00:16:39.119 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:39.378 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.378 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:39.378 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:39.378 19:17:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:39.378 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.378 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.379 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.638 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:M2JhZWU2YzMwNjNhNWNiMzk1MTZmYWUyNTJiNzE2ODk3ZTRhNDk2N2Y3NzZiMDgzJkG/Xw==: --dhchap-ctrl-secret DHHC-1:03:MGU4MmYyZTc0ZGM3MmE4NTIzYjBiZDQyNmYwYjhjNGUyODdmMzQ3MWEzYzAxZTlmMzAzODA1MDY1Mzk1ZjhmY/3k1VU=: 00:16:40.207 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.207 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:40.207 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.207 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.207 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.207 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:40.207 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:40.207 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:40.207 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:16:40.207 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:40.207 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:40.207 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:40.207 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:40.207 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.207 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.207 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.207 19:17:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.207 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.207 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.207 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.466 00:16:40.466 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.466 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.466 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.725 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.725 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.725 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.725 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.725 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.725 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:40.725 { 00:16:40.725 "cntlid": 19, 00:16:40.725 "qid": 0, 00:16:40.725 "state": "enabled", 00:16:40.725 "thread": "nvmf_tgt_poll_group_000", 00:16:40.725 "listen_address": { 00:16:40.725 "trtype": "TCP", 00:16:40.725 "adrfam": "IPv4", 00:16:40.725 "traddr": "10.0.0.2", 00:16:40.725 "trsvcid": "4420" 00:16:40.725 }, 00:16:40.725 "peer_address": { 00:16:40.725 "trtype": "TCP", 00:16:40.725 "adrfam": "IPv4", 00:16:40.725 "traddr": "10.0.0.1", 00:16:40.725 "trsvcid": "36052" 00:16:40.725 }, 00:16:40.725 "auth": { 00:16:40.725 "state": "completed", 00:16:40.725 "digest": "sha256", 00:16:40.725 "dhgroup": "ffdhe3072" 00:16:40.725 } 00:16:40.725 } 00:16:40.725 ]' 00:16:40.725 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:40.725 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:40.725 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:40.725 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:40.725 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:40.725 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.725 19:17:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.725 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.984 19:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZDQzZTViY2E5ZGMwYTk0ZTIyMDgzNTAzYTNhMjU5MTUqssYm: --dhchap-ctrl-secret DHHC-1:02:NTJlNmVhODYwYWRjNjMwZWIzM2YwYTgxNjRhODc0MWY4ZGViMWYyM2U4MDI4M2Q10rtSAw==: 00:16:41.551 19:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.551 19:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:41.551 19:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.551 19:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.551 19:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.551 19:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:41.551 19:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:41.551 19:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:41.810 19:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:16:41.810 19:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:41.810 19:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:41.810 19:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:41.810 19:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:41.810 19:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.810 19:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.810 19:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.810 19:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.810 19:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.810 19:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.810 19:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.070 00:16:42.070 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:42.070 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:42.070 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.070 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.070 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.070 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.070 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.070 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.070 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:42.070 { 00:16:42.070 "cntlid": 21, 00:16:42.070 "qid": 0, 00:16:42.070 "state": "enabled", 00:16:42.070 "thread": "nvmf_tgt_poll_group_000", 00:16:42.070 "listen_address": { 00:16:42.070 "trtype": "TCP", 00:16:42.070 "adrfam": "IPv4", 00:16:42.070 "traddr": "10.0.0.2", 00:16:42.070 "trsvcid": "4420" 00:16:42.070 }, 00:16:42.070 "peer_address": { 00:16:42.070 "trtype": "TCP", 00:16:42.070 "adrfam": "IPv4", 00:16:42.070 "traddr": "10.0.0.1", 00:16:42.070 "trsvcid": "36080" 00:16:42.070 }, 00:16:42.070 "auth": { 00:16:42.070 "state": "completed", 00:16:42.070 "digest": "sha256", 00:16:42.070 "dhgroup": "ffdhe3072" 00:16:42.070 } 00:16:42.070 } 00:16:42.070 ]' 00:16:42.070 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:42.329 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.329 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:42.329 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:42.329 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:42.329 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.329 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.329 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.587 
19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmE4ZTU1OWU3MDZjNWUyZmE0MTZkYzg2MWZmZjZkYjFjYjNlZjcxZTk1OWJmYTAwoLxyXg==: --dhchap-ctrl-secret DHHC-1:01:MGE1MjlhZTJhNTA2ZTAzYmQxZjc2NjlhZTU3NGE5MDgFpX2a: 00:16:43.157 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.157 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:43.157 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.157 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.157 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.157 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:43.157 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:43.157 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:43.157 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:16:43.157 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:43.157 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:43.157 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:43.157 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:43.157 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.157 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:16:43.157 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.157 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.157 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.157 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:43.157 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:43.415 00:16:43.415 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:43.415 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:43.415 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.674 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.674 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.674 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.674 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.674 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.674 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:43.674 { 00:16:43.674 "cntlid": 23, 00:16:43.674 "qid": 0, 00:16:43.674 "state": "enabled", 00:16:43.674 "thread": "nvmf_tgt_poll_group_000", 00:16:43.674 "listen_address": { 00:16:43.674 "trtype": "TCP", 00:16:43.674 "adrfam": "IPv4", 00:16:43.674 "traddr": "10.0.0.2", 00:16:43.674 "trsvcid": "4420" 00:16:43.674 }, 00:16:43.674 "peer_address": { 00:16:43.674 "trtype": "TCP", 00:16:43.674 "adrfam": "IPv4", 00:16:43.674 "traddr": "10.0.0.1", 00:16:43.674 "trsvcid": "36110" 00:16:43.674 }, 00:16:43.674 "auth": { 00:16:43.674 "state": "completed", 00:16:43.674 "digest": "sha256", 00:16:43.674 "dhgroup": "ffdhe3072" 00:16:43.674 } 00:16:43.674 } 00:16:43.674 ]' 00:16:43.674 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:43.674 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:43.674 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:43.674 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:43.674 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:43.674 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.674 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.674 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.933 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YTMxZWI2MmFiYTZhMjUwNWIzZDFjN2QxYjRiY2EzODE1ZmI2NTM2MTVkMDlmM2IzZGMwZWQ1ZTUyNDA2MzEwZqZhPb0=: 00:16:44.501 19:17:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.501 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:44.501 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.501 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.501 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.501 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:44.501 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:44.501 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:44.501 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:44.760 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:16:44.760 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:44.760 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:44.760 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:44.760 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:44.760 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.760 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.760 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.760 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.760 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.761 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.761 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.019 00:16:45.019 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:45.019 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.019 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:45.020 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.020 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.020 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.020 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.279 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.279 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:45.279 { 00:16:45.279 "cntlid": 25, 00:16:45.279 "qid": 0, 00:16:45.279 "state": "enabled", 00:16:45.279 "thread": "nvmf_tgt_poll_group_000", 00:16:45.279 "listen_address": { 00:16:45.279 "trtype": "TCP", 00:16:45.279 "adrfam": "IPv4", 00:16:45.279 "traddr": "10.0.0.2", 00:16:45.279 "trsvcid": "4420" 00:16:45.279 }, 00:16:45.279 "peer_address": { 00:16:45.279 "trtype": "TCP", 00:16:45.279 "adrfam": "IPv4", 00:16:45.279 "traddr": "10.0.0.1", 00:16:45.279 "trsvcid": "36134" 00:16:45.279 }, 00:16:45.279 "auth": { 00:16:45.279 "state": "completed", 00:16:45.279 "digest": "sha256", 00:16:45.279 "dhgroup": "ffdhe4096" 00:16:45.279 } 00:16:45.279 } 00:16:45.279 ]' 00:16:45.279 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.279 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.279 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:45.279 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:45.279 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:45.279 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.279 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.279 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.539 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:M2JhZWU2YzMwNjNhNWNiMzk1MTZmYWUyNTJiNzE2ODk3ZTRhNDk2N2Y3NzZiMDgzJkG/Xw==: --dhchap-ctrl-secret DHHC-1:03:MGU4MmYyZTc0ZGM3MmE4NTIzYjBiZDQyNmYwYjhjNGUyODdmMzQ3MWEzYzAxZTlmMzAzODA1MDY1Mzk1ZjhmY/3k1VU=: 00:16:46.108 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
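[Editor's note] The disconnect above closes one full connect_authenticate round from target/auth.sh, and the loop immediately moves on to the next key with the same shape. For readers following the trace, this is a minimal sketch of a single round, with the subsystem NQN, host NQN, address, and host RPC socket copied from the log. Two assumptions are not visible in the trace itself: the target-side rpc_cmd calls are taken to use SPDK's default RPC socket, and key0/ckey0 are taken to be key objects the script registered earlier.

  # Minimal sketch of one connect_authenticate round (sha256 / ffdhe4096 / key0).
  # ASSUMPTIONS: target RPCs go to the default socket; key0/ckey0 already exist.
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
  # Host side: restrict DH-CHAP negotiation to one digest/dhgroup pair.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  # Target side: allow this host on the subsystem with its key and controller key.
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Host side: attach over TCP; DH-HMAC-CHAP runs during the CONNECT exchange.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Target side: the new qpair should report auth.state == "completed".
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.state'
  # Tear down before the next key/dhgroup iteration.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
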
00:16:46.108 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:46.108 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.108 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.108 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.108 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:46.108 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:46.108 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:46.108 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:16:46.108 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.108 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:46.108 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:46.108 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:46.108 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.108 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.108 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.108 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.108 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.108 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.108 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.368 00:16:46.368 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:46.368 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.368 19:17:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:46.627 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.627 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.627 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.627 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.627 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.627 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:46.627 { 00:16:46.627 "cntlid": 27, 00:16:46.627 "qid": 0, 00:16:46.627 "state": "enabled", 00:16:46.627 "thread": "nvmf_tgt_poll_group_000", 00:16:46.627 "listen_address": { 00:16:46.627 "trtype": "TCP", 00:16:46.627 "adrfam": "IPv4", 00:16:46.627 "traddr": "10.0.0.2", 00:16:46.627 "trsvcid": "4420" 00:16:46.627 }, 00:16:46.627 "peer_address": { 00:16:46.627 "trtype": "TCP", 00:16:46.627 "adrfam": "IPv4", 00:16:46.627 "traddr": "10.0.0.1", 00:16:46.627 "trsvcid": "36170" 00:16:46.627 }, 00:16:46.627 "auth": { 00:16:46.627 "state": "completed", 00:16:46.627 "digest": "sha256", 00:16:46.627 "dhgroup": "ffdhe4096" 00:16:46.627 } 00:16:46.627 } 00:16:46.627 ]' 00:16:46.627 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:46.627 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.627 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:46.627 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:46.627 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:46.887 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.887 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.887 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.887 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZDQzZTViY2E5ZGMwYTk0ZTIyMDgzNTAzYTNhMjU5MTUqssYm: --dhchap-ctrl-secret DHHC-1:02:NTJlNmVhODYwYWRjNjMwZWIzM2YwYTgxNjRhODc0MWY4ZGViMWYyM2U4MDI4M2Q10rtSAw==: 00:16:47.455 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.455 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:47.455 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:47.455 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.455 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.455 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:47.455 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:47.455 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:47.714 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:16:47.714 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:47.714 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:47.714 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:47.714 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:47.714 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.714 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.714 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.714 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.714 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.714 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.714 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.973 00:16:47.973 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.973 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.973 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.232 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.232 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
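[Editor's note] The qpair JSON dumped next is the raw material for the round's assertions: the trace's three jq probes pull digest, dhgroup, and state off the first qpair and compare them against the loop's parameters. A sketch of that check, assuming $qpairs holds the nvmf_subsystem_get_qpairs output printed below (expected values taken from this sha256/ffdhe4096 round):

  # Assert the negotiated parameters on the first qpair; each test mirrors
  # one of the jq checks visible in the trace above and below.
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
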
00:16:48.232 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.232 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.232 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.232 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:48.232 { 00:16:48.232 "cntlid": 29, 00:16:48.232 "qid": 0, 00:16:48.232 "state": "enabled", 00:16:48.232 "thread": "nvmf_tgt_poll_group_000", 00:16:48.232 "listen_address": { 00:16:48.232 "trtype": "TCP", 00:16:48.232 "adrfam": "IPv4", 00:16:48.232 "traddr": "10.0.0.2", 00:16:48.232 "trsvcid": "4420" 00:16:48.232 }, 00:16:48.232 "peer_address": { 00:16:48.232 "trtype": "TCP", 00:16:48.232 "adrfam": "IPv4", 00:16:48.232 "traddr": "10.0.0.1", 00:16:48.232 "trsvcid": "36182" 00:16:48.232 }, 00:16:48.232 "auth": { 00:16:48.232 "state": "completed", 00:16:48.232 "digest": "sha256", 00:16:48.232 "dhgroup": "ffdhe4096" 00:16:48.232 } 00:16:48.232 } 00:16:48.232 ]' 00:16:48.232 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:48.232 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:48.232 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:48.232 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:48.232 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:48.232 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.232 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.232 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.492 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmE4ZTU1OWU3MDZjNWUyZmE0MTZkYzg2MWZmZjZkYjFjYjNlZjcxZTk1OWJmYTAwoLxyXg==: --dhchap-ctrl-secret DHHC-1:01:MGE1MjlhZTJhNTA2ZTAzYmQxZjc2NjlhZTU3NGE5MDgFpX2a: 00:16:49.060 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.060 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:49.060 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.060 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.060 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.060 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:16:49.060 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:49.060 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:49.060 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:16:49.061 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.061 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:49.061 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:49.061 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:49.061 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.061 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:16:49.061 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.061 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.061 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.061 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.061 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.321 00:16:49.321 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.321 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.321 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.581 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.581 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.581 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.581 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.581 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.581 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@45 -- # qpairs='[ 00:16:49.581 { 00:16:49.581 "cntlid": 31, 00:16:49.581 "qid": 0, 00:16:49.581 "state": "enabled", 00:16:49.581 "thread": "nvmf_tgt_poll_group_000", 00:16:49.581 "listen_address": { 00:16:49.581 "trtype": "TCP", 00:16:49.581 "adrfam": "IPv4", 00:16:49.581 "traddr": "10.0.0.2", 00:16:49.581 "trsvcid": "4420" 00:16:49.581 }, 00:16:49.581 "peer_address": { 00:16:49.581 "trtype": "TCP", 00:16:49.581 "adrfam": "IPv4", 00:16:49.581 "traddr": "10.0.0.1", 00:16:49.581 "trsvcid": "36202" 00:16:49.581 }, 00:16:49.581 "auth": { 00:16:49.581 "state": "completed", 00:16:49.581 "digest": "sha256", 00:16:49.581 "dhgroup": "ffdhe4096" 00:16:49.581 } 00:16:49.581 } 00:16:49.581 ]' 00:16:49.581 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.581 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.581 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.847 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:49.847 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.847 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.847 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.847 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.847 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YTMxZWI2MmFiYTZhMjUwNWIzZDFjN2QxYjRiY2EzODE1ZmI2NTM2MTVkMDlmM2IzZGMwZWQ1ZTUyNDA2MzEwZqZhPb0=: 00:16:50.451 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.451 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:50.451 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.451 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.451 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.451 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:50.451 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.451 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:50.451 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:50.711 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:16:50.711 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:50.711 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:50.711 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:50.711 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:50.711 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.711 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.711 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.711 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.711 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.711 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.711 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.971 00:16:50.971 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:50.971 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:50.971 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.230 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.230 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.230 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.230 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.230 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.230 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.230 { 00:16:51.230 "cntlid": 33, 00:16:51.230 "qid": 0, 00:16:51.230 "state": "enabled", 00:16:51.230 "thread": "nvmf_tgt_poll_group_000", 00:16:51.230 "listen_address": { 00:16:51.230 "trtype": "TCP", 00:16:51.230 "adrfam": "IPv4", 
00:16:51.230 "traddr": "10.0.0.2", 00:16:51.230 "trsvcid": "4420" 00:16:51.230 }, 00:16:51.230 "peer_address": { 00:16:51.230 "trtype": "TCP", 00:16:51.230 "adrfam": "IPv4", 00:16:51.230 "traddr": "10.0.0.1", 00:16:51.230 "trsvcid": "49946" 00:16:51.230 }, 00:16:51.230 "auth": { 00:16:51.230 "state": "completed", 00:16:51.230 "digest": "sha256", 00:16:51.230 "dhgroup": "ffdhe6144" 00:16:51.230 } 00:16:51.230 } 00:16:51.230 ]' 00:16:51.230 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:51.230 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.230 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:51.230 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:51.230 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:51.230 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.230 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.230 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.490 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:M2JhZWU2YzMwNjNhNWNiMzk1MTZmYWUyNTJiNzE2ODk3ZTRhNDk2N2Y3NzZiMDgzJkG/Xw==: --dhchap-ctrl-secret DHHC-1:03:MGU4MmYyZTc0ZGM3MmE4NTIzYjBiZDQyNmYwYjhjNGUyODdmMzQ3MWEzYzAxZTlmMzAzODA1MDY1Mzk1ZjhmY/3k1VU=: 00:16:52.058 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.058 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:52.058 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.058 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.058 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.058 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:52.058 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:52.058 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:52.317 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:16:52.317 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.317 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:52.317 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:52.317 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:52.317 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.317 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.317 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.317 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.317 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.317 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.317 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.576 00:16:52.576 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:52.576 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:52.576 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.835 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.836 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.836 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.836 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.836 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.836 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:52.836 { 00:16:52.836 "cntlid": 35, 00:16:52.836 "qid": 0, 00:16:52.836 "state": "enabled", 00:16:52.836 "thread": "nvmf_tgt_poll_group_000", 00:16:52.836 "listen_address": { 00:16:52.836 "trtype": "TCP", 00:16:52.836 "adrfam": "IPv4", 00:16:52.836 "traddr": "10.0.0.2", 00:16:52.836 "trsvcid": "4420" 00:16:52.836 }, 00:16:52.836 "peer_address": { 00:16:52.836 "trtype": "TCP", 00:16:52.836 "adrfam": "IPv4", 00:16:52.836 "traddr": "10.0.0.1", 00:16:52.836 "trsvcid": "49980" 00:16:52.836 }, 00:16:52.836 "auth": { 00:16:52.836 
"state": "completed", 00:16:52.836 "digest": "sha256", 00:16:52.836 "dhgroup": "ffdhe6144" 00:16:52.836 } 00:16:52.836 } 00:16:52.836 ]' 00:16:52.836 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:52.836 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:52.836 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:52.836 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:52.836 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:52.836 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.836 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.836 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.095 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZDQzZTViY2E5ZGMwYTk0ZTIyMDgzNTAzYTNhMjU5MTUqssYm: --dhchap-ctrl-secret DHHC-1:02:NTJlNmVhODYwYWRjNjMwZWIzM2YwYTgxNjRhODc0MWY4ZGViMWYyM2U4MDI4M2Q10rtSAw==: 00:16:53.664 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.664 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:53.664 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.664 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.664 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.664 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:53.664 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:53.664 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:53.664 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:16:53.664 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:53.664 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:53.664 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:53.664 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key2 00:16:53.664 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.664 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.664 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.664 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.664 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.664 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.664 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.233 00:16:54.233 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:54.233 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:54.233 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.233 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.233 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.233 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.233 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.233 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.233 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:54.233 { 00:16:54.233 "cntlid": 37, 00:16:54.233 "qid": 0, 00:16:54.233 "state": "enabled", 00:16:54.233 "thread": "nvmf_tgt_poll_group_000", 00:16:54.233 "listen_address": { 00:16:54.233 "trtype": "TCP", 00:16:54.233 "adrfam": "IPv4", 00:16:54.233 "traddr": "10.0.0.2", 00:16:54.233 "trsvcid": "4420" 00:16:54.233 }, 00:16:54.233 "peer_address": { 00:16:54.233 "trtype": "TCP", 00:16:54.233 "adrfam": "IPv4", 00:16:54.233 "traddr": "10.0.0.1", 00:16:54.233 "trsvcid": "50014" 00:16:54.233 }, 00:16:54.233 "auth": { 00:16:54.233 "state": "completed", 00:16:54.233 "digest": "sha256", 00:16:54.233 "dhgroup": "ffdhe6144" 00:16:54.233 } 00:16:54.233 } 00:16:54.233 ]' 00:16:54.233 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:54.233 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha256 == \s\h\a\2\5\6 ]] 00:16:54.233 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:54.492 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:54.492 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:54.492 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.492 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.492 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.492 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmE4ZTU1OWU3MDZjNWUyZmE0MTZkYzg2MWZmZjZkYjFjYjNlZjcxZTk1OWJmYTAwoLxyXg==: --dhchap-ctrl-secret DHHC-1:01:MGE1MjlhZTJhNTA2ZTAzYmQxZjc2NjlhZTU3NGE5MDgFpX2a: 00:16:55.061 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.061 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:55.061 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.061 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.061 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.061 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:55.061 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:55.061 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:55.320 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:16:55.320 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:55.320 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:55.320 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:55.320 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:55.320 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.320 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:16:55.320 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.320 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.320 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.320 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.320 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.580 00:16:55.580 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:55.580 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:55.580 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.839 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.839 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.839 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.839 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.839 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.839 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:55.839 { 00:16:55.839 "cntlid": 39, 00:16:55.839 "qid": 0, 00:16:55.839 "state": "enabled", 00:16:55.839 "thread": "nvmf_tgt_poll_group_000", 00:16:55.839 "listen_address": { 00:16:55.839 "trtype": "TCP", 00:16:55.839 "adrfam": "IPv4", 00:16:55.839 "traddr": "10.0.0.2", 00:16:55.839 "trsvcid": "4420" 00:16:55.839 }, 00:16:55.839 "peer_address": { 00:16:55.839 "trtype": "TCP", 00:16:55.839 "adrfam": "IPv4", 00:16:55.839 "traddr": "10.0.0.1", 00:16:55.839 "trsvcid": "50034" 00:16:55.839 }, 00:16:55.839 "auth": { 00:16:55.839 "state": "completed", 00:16:55.839 "digest": "sha256", 00:16:55.839 "dhgroup": "ffdhe6144" 00:16:55.839 } 00:16:55.839 } 00:16:55.839 ]' 00:16:55.839 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:55.839 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:55.839 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:55.839 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:55.839 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:55.839 
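[annotation] After each attach, the trace dumps the subsystem's qpairs and asserts what was negotiated; that is what the repeated [[ sha256 == \s\h\a\2\5\6 ]]-style checks are doing. A minimal sketch of that verification, with the jq filters copied from the trace and target_rpc/$digest/$dhgroup assumed as in the earlier sketch:

  # Fetch the qpair list from the target and check the negotiated auth
  # parameters; .[0] suffices because the dumps above show a single qpair
  # (qid 0) per round.
  qpairs=$("${target_rpc[@]}" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]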
19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.839 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.839 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.099 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YTMxZWI2MmFiYTZhMjUwNWIzZDFjN2QxYjRiY2EzODE1ZmI2NTM2MTVkMDlmM2IzZGMwZWQ1ZTUyNDA2MzEwZqZhPb0=: 00:16:56.667 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.667 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:56.667 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.667 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.667 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.667 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:56.667 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:56.667 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:56.667 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:56.925 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:16:56.925 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:56.925 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:56.925 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:56.925 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:56.925 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.925 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.925 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.925 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.925 19:17:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.925 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.925 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.183 00:16:57.441 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:57.441 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:57.441 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.441 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.441 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.441 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.441 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.441 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.441 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:57.441 { 00:16:57.441 "cntlid": 41, 00:16:57.441 "qid": 0, 00:16:57.441 "state": "enabled", 00:16:57.441 "thread": "nvmf_tgt_poll_group_000", 00:16:57.441 "listen_address": { 00:16:57.441 "trtype": "TCP", 00:16:57.441 "adrfam": "IPv4", 00:16:57.441 "traddr": "10.0.0.2", 00:16:57.441 "trsvcid": "4420" 00:16:57.441 }, 00:16:57.441 "peer_address": { 00:16:57.441 "trtype": "TCP", 00:16:57.441 "adrfam": "IPv4", 00:16:57.441 "traddr": "10.0.0.1", 00:16:57.441 "trsvcid": "50058" 00:16:57.441 }, 00:16:57.441 "auth": { 00:16:57.441 "state": "completed", 00:16:57.441 "digest": "sha256", 00:16:57.441 "dhgroup": "ffdhe8192" 00:16:57.441 } 00:16:57.441 } 00:16:57.441 ]' 00:16:57.441 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:57.441 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:57.441 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:57.700 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:57.700 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:57.700 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.700 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.700 19:17:43 
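[annotation] The detach just above closes the verification half of a round: the script first confirms the named controller actually exists, then removes it. A sketch of that pair, reusing the host_rpc helper assumed earlier:

  # The attach only counts if bdev_nvme_get_controllers reports nvme0.
  [[ $("${host_rpc[@]}" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  "${host_rpc[@]}" bdev_nvme_detach_controller nvme0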
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.700 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:M2JhZWU2YzMwNjNhNWNiMzk1MTZmYWUyNTJiNzE2ODk3ZTRhNDk2N2Y3NzZiMDgzJkG/Xw==: --dhchap-ctrl-secret DHHC-1:03:MGU4MmYyZTc0ZGM3MmE4NTIzYjBiZDQyNmYwYjhjNGUyODdmMzQ3MWEzYzAxZTlmMzAzODA1MDY1Mzk1ZjhmY/3k1VU=: 00:16:58.268 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.268 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:58.268 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.268 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.268 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.269 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:58.269 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:58.269 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:58.528 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:16:58.528 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:58.528 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:58.528 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:58.528 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:58.528 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.528 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.528 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.528 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.528 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.528 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.528 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.095 00:16:59.095 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:59.095 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:59.095 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.095 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.096 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.096 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.096 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.096 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.096 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:59.096 { 00:16:59.096 "cntlid": 43, 00:16:59.096 "qid": 0, 00:16:59.096 "state": "enabled", 00:16:59.096 "thread": "nvmf_tgt_poll_group_000", 00:16:59.096 "listen_address": { 00:16:59.096 "trtype": "TCP", 00:16:59.096 "adrfam": "IPv4", 00:16:59.096 "traddr": "10.0.0.2", 00:16:59.096 "trsvcid": "4420" 00:16:59.096 }, 00:16:59.096 "peer_address": { 00:16:59.096 "trtype": "TCP", 00:16:59.096 "adrfam": "IPv4", 00:16:59.096 "traddr": "10.0.0.1", 00:16:59.096 "trsvcid": "50074" 00:16:59.096 }, 00:16:59.096 "auth": { 00:16:59.096 "state": "completed", 00:16:59.096 "digest": "sha256", 00:16:59.096 "dhgroup": "ffdhe8192" 00:16:59.096 } 00:16:59.096 } 00:16:59.096 ]' 00:16:59.096 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:59.353 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:59.353 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:59.353 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:59.353 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:59.353 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.353 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.353 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.611 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # 
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZDQzZTViY2E5ZGMwYTk0ZTIyMDgzNTAzYTNhMjU5MTUqssYm: --dhchap-ctrl-secret DHHC-1:02:NTJlNmVhODYwYWRjNjMwZWIzM2YwYTgxNjRhODc0MWY4ZGViMWYyM2U4MDI4M2Q10rtSAw==: 00:17:00.179 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.179 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:00.179 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.179 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.179 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.179 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:00.179 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:00.179 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:00.179 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:17:00.179 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:00.179 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:00.179 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:00.179 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:00.179 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.179 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.179 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.179 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.179 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.179 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.179 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.746 00:17:00.746 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:00.746 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:00.746 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.746 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.746 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.746 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.746 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.006 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.006 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:01.006 { 00:17:01.006 "cntlid": 45, 00:17:01.006 "qid": 0, 00:17:01.006 "state": "enabled", 00:17:01.006 "thread": "nvmf_tgt_poll_group_000", 00:17:01.006 "listen_address": { 00:17:01.006 "trtype": "TCP", 00:17:01.006 "adrfam": "IPv4", 00:17:01.006 "traddr": "10.0.0.2", 00:17:01.006 "trsvcid": "4420" 00:17:01.006 }, 00:17:01.006 "peer_address": { 00:17:01.006 "trtype": "TCP", 00:17:01.006 "adrfam": "IPv4", 00:17:01.006 "traddr": "10.0.0.1", 00:17:01.006 "trsvcid": "59326" 00:17:01.006 }, 00:17:01.006 "auth": { 00:17:01.006 "state": "completed", 00:17:01.006 "digest": "sha256", 00:17:01.006 "dhgroup": "ffdhe8192" 00:17:01.006 } 00:17:01.006 } 00:17:01.006 ]' 00:17:01.006 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:01.006 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:01.006 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:01.006 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:01.006 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:01.006 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.006 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.006 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.265 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmE4ZTU1OWU3MDZjNWUyZmE0MTZkYzg2MWZmZjZkYjFjYjNlZjcxZTk1OWJmYTAwoLxyXg==: --dhchap-ctrl-secret 
DHHC-1:01:MGE1MjlhZTJhNTA2ZTAzYmQxZjc2NjlhZTU3NGE5MDgFpX2a: 00:17:01.833 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.833 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:01.833 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.833 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.833 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.833 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:01.833 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:01.833 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:01.833 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:17:01.833 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:01.833 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:01.833 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:01.833 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:01.833 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.833 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:01.833 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.833 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.833 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.833 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:01.833 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:02.401 00:17:02.401 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:02.401 19:17:48 
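[annotation] Each round then repeats the handshake from the kernel initiator via nvme-cli, as in the connect/disconnect pairs above. A sketch with the flags copied from the trace; $key and $ckey stand for the DHHC-1 secrets, whose "DHHC-1:<hh>:" prefix appears to encode how the secret is transformed (00 for a non-transformed secret, per the NVMe DH-HMAC-CHAP secret representation; treat that reading as an assumption here).

  # Kernel-initiator half of the round: authenticate with the same secrets.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid 006f0d1b-21c0-e711-906e-00163566263e \
      --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  # Deregister the host so the next round starts clean.
  "${target_rpc[@]}" nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"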
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:02.401 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.660 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.660 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.660 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.660 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.660 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.660 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:02.660 { 00:17:02.660 "cntlid": 47, 00:17:02.660 "qid": 0, 00:17:02.660 "state": "enabled", 00:17:02.660 "thread": "nvmf_tgt_poll_group_000", 00:17:02.660 "listen_address": { 00:17:02.660 "trtype": "TCP", 00:17:02.660 "adrfam": "IPv4", 00:17:02.660 "traddr": "10.0.0.2", 00:17:02.660 "trsvcid": "4420" 00:17:02.660 }, 00:17:02.660 "peer_address": { 00:17:02.660 "trtype": "TCP", 00:17:02.660 "adrfam": "IPv4", 00:17:02.660 "traddr": "10.0.0.1", 00:17:02.660 "trsvcid": "59348" 00:17:02.660 }, 00:17:02.660 "auth": { 00:17:02.660 "state": "completed", 00:17:02.660 "digest": "sha256", 00:17:02.660 "dhgroup": "ffdhe8192" 00:17:02.660 } 00:17:02.660 } 00:17:02.660 ]' 00:17:02.660 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:02.660 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:02.660 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:02.660 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:02.660 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:02.660 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.660 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.660 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.920 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YTMxZWI2MmFiYTZhMjUwNWIzZDFjN2QxYjRiY2EzODE1ZmI2NTM2MTVkMDlmM2IzZGMwZWQ1ZTUyNDA2MzEwZqZhPb0=: 00:17:03.528 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.528 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:03.528 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.528 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.528 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.528 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:03.528 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:03.528 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:03.528 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:03.528 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:03.528 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:17:03.528 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:03.528 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:03.528 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:03.528 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:03.528 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.528 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.528 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.528 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.528 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.528 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.528 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.787 00:17:03.787 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:03.787 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:03.787 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.046 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.047 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.047 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.047 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.047 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.047 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:04.047 { 00:17:04.047 "cntlid": 49, 00:17:04.047 "qid": 0, 00:17:04.047 "state": "enabled", 00:17:04.047 "thread": "nvmf_tgt_poll_group_000", 00:17:04.047 "listen_address": { 00:17:04.047 "trtype": "TCP", 00:17:04.047 "adrfam": "IPv4", 00:17:04.047 "traddr": "10.0.0.2", 00:17:04.047 "trsvcid": "4420" 00:17:04.047 }, 00:17:04.047 "peer_address": { 00:17:04.047 "trtype": "TCP", 00:17:04.047 "adrfam": "IPv4", 00:17:04.047 "traddr": "10.0.0.1", 00:17:04.047 "trsvcid": "59378" 00:17:04.047 }, 00:17:04.047 "auth": { 00:17:04.047 "state": "completed", 00:17:04.047 "digest": "sha384", 00:17:04.047 "dhgroup": "null" 00:17:04.047 } 00:17:04.047 } 00:17:04.047 ]' 00:17:04.047 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:04.047 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:04.047 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:04.047 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:04.047 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:04.047 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.047 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.047 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.305 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:M2JhZWU2YzMwNjNhNWNiMzk1MTZmYWUyNTJiNzE2ODk3ZTRhNDk2N2Y3NzZiMDgzJkG/Xw==: --dhchap-ctrl-secret DHHC-1:03:MGU4MmYyZTc0ZGM3MmE4NTIzYjBiZDQyNmYwYjhjNGUyODdmMzQ3MWEzYzAxZTlmMzAzODA1MDY1Mzk1ZjhmY/3k1VU=: 00:17:04.874 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.874 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:04.874 19:17:51 
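[annotation] By this point the trace has moved from sha256 with the ffdhe groups to sha384 with the null dhgroup, following the loop markers at auth.sh@91 through @94. The skeleton those markers imply is sketched below; the exact contents of the digests/dhgroups/keys arrays beyond the values visible in the log are assumptions.

  # Every digest is crossed with every dhgroup and every configured key;
  # connect_authenticate takes the three values positionally, matching the
  # "connect_authenticate sha384 null 0" calls in the trace.
  for digest in "${digests[@]}"; do        # sha256, sha384, ... (assumed list)
      for dhgroup in "${dhgroups[@]}"; do  # null, ffdhe6144, ffdhe8192, ...
          for keyid in "${!keys[@]}"; do   # key0..key3 in this run
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done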
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.874 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.874 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.874 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:04.874 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:04.874 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:05.133 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:17:05.133 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:05.133 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:05.133 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:05.133 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:05.133 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.133 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.133 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.133 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.133 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.133 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.133 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.393 00:17:05.393 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:05.393 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.393 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:05.393 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.393 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.393 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.393 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.393 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.393 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:05.393 { 00:17:05.393 "cntlid": 51, 00:17:05.393 "qid": 0, 00:17:05.393 "state": "enabled", 00:17:05.393 "thread": "nvmf_tgt_poll_group_000", 00:17:05.393 "listen_address": { 00:17:05.393 "trtype": "TCP", 00:17:05.393 "adrfam": "IPv4", 00:17:05.393 "traddr": "10.0.0.2", 00:17:05.393 "trsvcid": "4420" 00:17:05.393 }, 00:17:05.393 "peer_address": { 00:17:05.393 "trtype": "TCP", 00:17:05.393 "adrfam": "IPv4", 00:17:05.393 "traddr": "10.0.0.1", 00:17:05.393 "trsvcid": "59412" 00:17:05.393 }, 00:17:05.393 "auth": { 00:17:05.393 "state": "completed", 00:17:05.393 "digest": "sha384", 00:17:05.393 "dhgroup": "null" 00:17:05.393 } 00:17:05.393 } 00:17:05.393 ]' 00:17:05.393 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:05.652 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:05.652 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:05.652 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:05.652 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:05.652 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.652 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.652 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.911 19:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZDQzZTViY2E5ZGMwYTk0ZTIyMDgzNTAzYTNhMjU5MTUqssYm: --dhchap-ctrl-secret DHHC-1:02:NTJlNmVhODYwYWRjNjMwZWIzM2YwYTgxNjRhODc0MWY4ZGViMWYyM2U4MDI4M2Q10rtSAw==: 00:17:06.480 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.480 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:06.480 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.480 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.480 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.480 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:06.480 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:06.480 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:06.480 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:17:06.480 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:06.480 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:06.481 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:06.481 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:06.481 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.481 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.481 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.481 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.481 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.481 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.481 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.740 00:17:06.740 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:06.740 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:06.740 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.999 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.999 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.999 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.999 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.999 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:17:06.999 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:06.999 { 00:17:06.999 "cntlid": 53, 00:17:06.999 "qid": 0, 00:17:06.999 "state": "enabled", 00:17:06.999 "thread": "nvmf_tgt_poll_group_000", 00:17:06.999 "listen_address": { 00:17:06.999 "trtype": "TCP", 00:17:06.999 "adrfam": "IPv4", 00:17:06.999 "traddr": "10.0.0.2", 00:17:06.999 "trsvcid": "4420" 00:17:06.999 }, 00:17:06.999 "peer_address": { 00:17:06.999 "trtype": "TCP", 00:17:06.999 "adrfam": "IPv4", 00:17:06.999 "traddr": "10.0.0.1", 00:17:06.999 "trsvcid": "59446" 00:17:06.999 }, 00:17:06.999 "auth": { 00:17:06.999 "state": "completed", 00:17:06.999 "digest": "sha384", 00:17:06.999 "dhgroup": "null" 00:17:06.999 } 00:17:06.999 } 00:17:06.999 ]' 00:17:06.999 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:06.999 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:06.999 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:06.999 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:06.999 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:06.999 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.999 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.999 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.258 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmE4ZTU1OWU3MDZjNWUyZmE0MTZkYzg2MWZmZjZkYjFjYjNlZjcxZTk1OWJmYTAwoLxyXg==: --dhchap-ctrl-secret DHHC-1:01:MGE1MjlhZTJhNTA2ZTAzYmQxZjc2NjlhZTU3NGE5MDgFpX2a: 00:17:07.825 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.826 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:07.826 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.826 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.826 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.826 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:07.826 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:07.826 19:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:08.085 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:17:08.085 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:08.085 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:08.085 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:08.085 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:08.085 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.085 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:08.085 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.085 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.085 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.085 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:08.085 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:08.344 00:17:08.344 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:08.344 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:08.344 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.344 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.344 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.344 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.344 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.344 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.344 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:08.344 { 00:17:08.344 "cntlid": 55, 00:17:08.344 "qid": 0, 00:17:08.344 "state": "enabled", 00:17:08.344 "thread": "nvmf_tgt_poll_group_000", 00:17:08.344 "listen_address": { 00:17:08.344 "trtype": "TCP", 00:17:08.344 "adrfam": "IPv4", 00:17:08.344 "traddr": "10.0.0.2", 00:17:08.344 "trsvcid": "4420" 00:17:08.344 }, 00:17:08.344 "peer_address": { 
00:17:08.344 "trtype": "TCP", 00:17:08.344 "adrfam": "IPv4", 00:17:08.344 "traddr": "10.0.0.1", 00:17:08.344 "trsvcid": "59494" 00:17:08.344 }, 00:17:08.344 "auth": { 00:17:08.344 "state": "completed", 00:17:08.344 "digest": "sha384", 00:17:08.344 "dhgroup": "null" 00:17:08.344 } 00:17:08.344 } 00:17:08.344 ]' 00:17:08.344 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:08.603 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:08.603 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:08.603 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:08.603 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:08.603 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.603 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.603 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.862 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YTMxZWI2MmFiYTZhMjUwNWIzZDFjN2QxYjRiY2EzODE1ZmI2NTM2MTVkMDlmM2IzZGMwZWQ1ZTUyNDA2MzEwZqZhPb0=: 00:17:09.431 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.431 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.431 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:09.431 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.431 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.431 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.431 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:09.431 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:09.431 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:09.431 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:09.431 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:17:09.431 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:09.431 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:17:09.431 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:09.431 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:09.431 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.432 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.432 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.432 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.432 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.432 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.432 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.691 00:17:09.691 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:09.691 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.691 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:09.950 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.950 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.950 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.950 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.950 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.950 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:09.950 { 00:17:09.950 "cntlid": 57, 00:17:09.950 "qid": 0, 00:17:09.950 "state": "enabled", 00:17:09.950 "thread": "nvmf_tgt_poll_group_000", 00:17:09.950 "listen_address": { 00:17:09.950 "trtype": "TCP", 00:17:09.950 "adrfam": "IPv4", 00:17:09.950 "traddr": "10.0.0.2", 00:17:09.950 "trsvcid": "4420" 00:17:09.950 }, 00:17:09.950 "peer_address": { 00:17:09.950 "trtype": "TCP", 00:17:09.950 "adrfam": "IPv4", 00:17:09.950 "traddr": "10.0.0.1", 00:17:09.950 "trsvcid": "59524" 00:17:09.950 }, 00:17:09.950 "auth": { 00:17:09.950 "state": "completed", 00:17:09.950 "digest": "sha384", 00:17:09.950 "dhgroup": "ffdhe2048" 00:17:09.950 } 00:17:09.950 } 00:17:09.950 ]' 
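Condensed, the verification that target/auth.sh repeats after each attach amounts to the sketch below; hostrpc (RPC over /var/tmp/host.sock) and rpc_cmd (target-side RPC) are the wrappers already used throughout this log, and the expected values are the ones for the sha384/ffdhe2048 pass above:

    # did the host controller come up under the expected name?
    name=$(hostrpc bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]

    # did the new admin queue authenticate with the expected parameters?
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]]  # DH-HMAC-CHAP finished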
00:17:09.950 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:09.950 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.950 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:09.950 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:09.950 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:09.950 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.950 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.950 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.210 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:M2JhZWU2YzMwNjNhNWNiMzk1MTZmYWUyNTJiNzE2ODk3ZTRhNDk2N2Y3NzZiMDgzJkG/Xw==: --dhchap-ctrl-secret DHHC-1:03:MGU4MmYyZTc0ZGM3MmE4NTIzYjBiZDQyNmYwYjhjNGUyODdmMzQ3MWEzYzAxZTlmMzAzODA1MDY1Mzk1ZjhmY/3k1VU=: 00:17:10.780 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.780 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:10.780 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.780 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.780 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.780 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:10.780 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:10.780 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:11.039 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:17:11.039 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:11.039 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:11.039 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:11.039 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:11.040 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.040 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.040 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.040 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.040 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.040 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.040 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.299 00:17:11.299 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:11.299 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:11.299 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.299 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.299 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.299 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.299 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.299 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.299 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:11.299 { 00:17:11.299 "cntlid": 59, 00:17:11.299 "qid": 0, 00:17:11.299 "state": "enabled", 00:17:11.299 "thread": "nvmf_tgt_poll_group_000", 00:17:11.299 "listen_address": { 00:17:11.299 "trtype": "TCP", 00:17:11.299 "adrfam": "IPv4", 00:17:11.299 "traddr": "10.0.0.2", 00:17:11.299 "trsvcid": "4420" 00:17:11.299 }, 00:17:11.299 "peer_address": { 00:17:11.299 "trtype": "TCP", 00:17:11.299 "adrfam": "IPv4", 00:17:11.299 "traddr": "10.0.0.1", 00:17:11.299 "trsvcid": "47200" 00:17:11.299 }, 00:17:11.299 "auth": { 00:17:11.299 "state": "completed", 00:17:11.299 "digest": "sha384", 00:17:11.299 "dhgroup": "ffdhe2048" 00:17:11.299 } 00:17:11.299 } 00:17:11.299 ]' 00:17:11.299 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:11.558 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:11.558 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:11.558 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:11.558 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:11.558 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.558 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.558 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.818 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZDQzZTViY2E5ZGMwYTk0ZTIyMDgzNTAzYTNhMjU5MTUqssYm: --dhchap-ctrl-secret DHHC-1:02:NTJlNmVhODYwYWRjNjMwZWIzM2YwYTgxNjRhODc0MWY4ZGViMWYyM2U4MDI4M2Q10rtSAw==: 00:17:12.387 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.387 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:12.387 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.387 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.387 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.387 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:12.387 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:12.387 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:12.387 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:17:12.387 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:12.387 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:12.387 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:12.387 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:12.387 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.387 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.387 
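Each (digest, dhgroup, keyid) combination exercised here follows the same short RPC sequence; a sketch using the NQNs from this run (key2 shown, the other key IDs are analogous):

    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e

    # pin the host to one digest/dhgroup pair (auth.sh@94)
    hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
    # authorize the host on the subsystem with the key under test (auth.sh@39)
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # attaching forces a DH-HMAC-CHAP exchange on the new admin queue (auth.sh@40)
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2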
19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.387 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.387 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.387 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.387 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.647 00:17:12.647 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:12.647 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:12.647 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.906 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.906 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.906 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.906 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.907 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.907 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:12.907 { 00:17:12.907 "cntlid": 61, 00:17:12.907 "qid": 0, 00:17:12.907 "state": "enabled", 00:17:12.907 "thread": "nvmf_tgt_poll_group_000", 00:17:12.907 "listen_address": { 00:17:12.907 "trtype": "TCP", 00:17:12.907 "adrfam": "IPv4", 00:17:12.907 "traddr": "10.0.0.2", 00:17:12.907 "trsvcid": "4420" 00:17:12.907 }, 00:17:12.907 "peer_address": { 00:17:12.907 "trtype": "TCP", 00:17:12.907 "adrfam": "IPv4", 00:17:12.907 "traddr": "10.0.0.1", 00:17:12.907 "trsvcid": "47234" 00:17:12.907 }, 00:17:12.907 "auth": { 00:17:12.907 "state": "completed", 00:17:12.907 "digest": "sha384", 00:17:12.907 "dhgroup": "ffdhe2048" 00:17:12.907 } 00:17:12.907 } 00:17:12.907 ]' 00:17:12.907 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:12.907 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:12.907 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:12.907 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:12.907 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:12.907 19:17:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.907 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.907 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.166 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmE4ZTU1OWU3MDZjNWUyZmE0MTZkYzg2MWZmZjZkYjFjYjNlZjcxZTk1OWJmYTAwoLxyXg==: --dhchap-ctrl-secret DHHC-1:01:MGE1MjlhZTJhNTA2ZTAzYmQxZjc2NjlhZTU3NGE5MDgFpX2a: 00:17:13.735 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.736 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:13.736 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.736 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.736 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.736 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:13.736 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:13.736 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:13.736 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:17:13.736 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.736 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:13.736 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:13.736 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:13.736 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.736 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:13.736 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.736 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.736 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.736 
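The auth.sh@92/@93 markers scattered through this output are the two loops driving the run; reconstructed as a sketch (the dhgroups and keys arrays are populated earlier in target/auth.sh; the groups visible in this stretch of the log are null, ffdhe2048, ffdhe3072 and ffdhe4096, all under sha384):

    for dhgroup in "${dhgroups[@]}"; do                     # auth.sh@92
        for keyid in "${!keys[@]}"; do                      # auth.sh@93: key0..key3
            hostrpc bdev_nvme_set_options --dhchap-digests sha384 \
                --dhchap-dhgroups "$dhgroup"                # auth.sh@94
            connect_authenticate sha384 "$dhgroup" "$keyid" # auth.sh@96
        done
    done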
19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:13.736 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:13.995 00:17:13.995 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:13.995 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:13.995 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.255 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.255 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.255 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.255 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.255 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.255 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:14.255 { 00:17:14.255 "cntlid": 63, 00:17:14.255 "qid": 0, 00:17:14.255 "state": "enabled", 00:17:14.255 "thread": "nvmf_tgt_poll_group_000", 00:17:14.255 "listen_address": { 00:17:14.255 "trtype": "TCP", 00:17:14.255 "adrfam": "IPv4", 00:17:14.255 "traddr": "10.0.0.2", 00:17:14.255 "trsvcid": "4420" 00:17:14.255 }, 00:17:14.255 "peer_address": { 00:17:14.255 "trtype": "TCP", 00:17:14.255 "adrfam": "IPv4", 00:17:14.255 "traddr": "10.0.0.1", 00:17:14.255 "trsvcid": "47266" 00:17:14.255 }, 00:17:14.255 "auth": { 00:17:14.255 "state": "completed", 00:17:14.255 "digest": "sha384", 00:17:14.255 "dhgroup": "ffdhe2048" 00:17:14.255 } 00:17:14.255 } 00:17:14.255 ]' 00:17:14.255 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.255 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:14.255 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:14.255 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:14.255 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:14.515 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.515 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.515 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:14.515 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YTMxZWI2MmFiYTZhMjUwNWIzZDFjN2QxYjRiY2EzODE1ZmI2NTM2MTVkMDlmM2IzZGMwZWQ1ZTUyNDA2MzEwZqZhPb0=: 00:17:15.083 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.083 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:15.083 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.083 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.083 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.083 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:15.083 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.083 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:15.083 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:15.342 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:17:15.342 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.342 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:15.342 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:15.342 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:15.342 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.342 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.342 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.342 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.342 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.342 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.342 19:18:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.601 00:17:15.601 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:15.601 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:15.602 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.861 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.861 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.861 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.861 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.861 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.861 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:15.861 { 00:17:15.861 "cntlid": 65, 00:17:15.861 "qid": 0, 00:17:15.861 "state": "enabled", 00:17:15.861 "thread": "nvmf_tgt_poll_group_000", 00:17:15.861 "listen_address": { 00:17:15.861 "trtype": "TCP", 00:17:15.861 "adrfam": "IPv4", 00:17:15.861 "traddr": "10.0.0.2", 00:17:15.861 "trsvcid": "4420" 00:17:15.861 }, 00:17:15.861 "peer_address": { 00:17:15.861 "trtype": "TCP", 00:17:15.861 "adrfam": "IPv4", 00:17:15.861 "traddr": "10.0.0.1", 00:17:15.861 "trsvcid": "47294" 00:17:15.861 }, 00:17:15.861 "auth": { 00:17:15.861 "state": "completed", 00:17:15.861 "digest": "sha384", 00:17:15.861 "dhgroup": "ffdhe3072" 00:17:15.861 } 00:17:15.861 } 00:17:15.861 ]' 00:17:15.861 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:15.861 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:15.861 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:15.861 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:15.861 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:15.861 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.861 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.861 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.124 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 
006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:M2JhZWU2YzMwNjNhNWNiMzk1MTZmYWUyNTJiNzE2ODk3ZTRhNDk2N2Y3NzZiMDgzJkG/Xw==: --dhchap-ctrl-secret DHHC-1:03:MGU4MmYyZTc0ZGM3MmE4NTIzYjBiZDQyNmYwYjhjNGUyODdmMzQ3MWEzYzAxZTlmMzAzODA1MDY1Mzk1ZjhmY/3k1VU=: 00:17:16.754 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.754 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:16.754 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.754 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.754 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.754 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:16.754 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:16.754 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:16.754 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:17:16.754 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:16.754 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:16.754 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:16.754 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:16.754 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.754 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.754 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.754 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.754 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.754 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.754 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.014 00:17:17.014 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:17.014 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:17.014 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.282 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.282 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.282 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.282 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.282 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.282 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:17.282 { 00:17:17.282 "cntlid": 67, 00:17:17.282 "qid": 0, 00:17:17.282 "state": "enabled", 00:17:17.282 "thread": "nvmf_tgt_poll_group_000", 00:17:17.282 "listen_address": { 00:17:17.282 "trtype": "TCP", 00:17:17.282 "adrfam": "IPv4", 00:17:17.282 "traddr": "10.0.0.2", 00:17:17.282 "trsvcid": "4420" 00:17:17.282 }, 00:17:17.282 "peer_address": { 00:17:17.282 "trtype": "TCP", 00:17:17.282 "adrfam": "IPv4", 00:17:17.282 "traddr": "10.0.0.1", 00:17:17.282 "trsvcid": "47328" 00:17:17.282 }, 00:17:17.282 "auth": { 00:17:17.282 "state": "completed", 00:17:17.282 "digest": "sha384", 00:17:17.282 "dhgroup": "ffdhe3072" 00:17:17.282 } 00:17:17.282 } 00:17:17.282 ]' 00:17:17.282 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:17.282 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:17.282 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:17.282 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:17.282 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:17.282 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.282 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.282 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.548 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZDQzZTViY2E5ZGMwYTk0ZTIyMDgzNTAzYTNhMjU5MTUqssYm: --dhchap-ctrl-secret DHHC-1:02:NTJlNmVhODYwYWRjNjMwZWIzM2YwYTgxNjRhODc0MWY4ZGViMWYyM2U4MDI4M2Q10rtSAw==: 00:17:18.116 19:18:04 
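The secrets passed to nvme connect above use the DHHC-1 representation from the NVMe-oF in-band authentication spec: DHHC-1:<hh>:<base64 of key material plus CRC-32>:, where <hh> is 00 (unhashed), 01 (SHA-256), 02 (SHA-384) or 03 (SHA-512). A compatible secret can be minted with nvme-cli; a sketch, assuming a version of nvme-cli that ships gen-dhchap-key:

    # 48-byte key with the SHA-384 transform, bound to this run's host NQN
    nvme gen-dhchap-key --hmac=2 --key-length=48 \
        --nqn nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e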
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.116 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:18.116 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.116 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.116 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.116 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:18.116 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:18.116 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:18.375 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:17:18.375 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:18.375 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:18.375 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:18.375 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:18.375 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.375 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.375 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.375 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.375 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.375 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.375 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.635 00:17:18.635 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.635 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.635 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.635 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.635 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.635 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.635 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.635 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.635 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.635 { 00:17:18.635 "cntlid": 69, 00:17:18.635 "qid": 0, 00:17:18.635 "state": "enabled", 00:17:18.635 "thread": "nvmf_tgt_poll_group_000", 00:17:18.635 "listen_address": { 00:17:18.635 "trtype": "TCP", 00:17:18.635 "adrfam": "IPv4", 00:17:18.635 "traddr": "10.0.0.2", 00:17:18.635 "trsvcid": "4420" 00:17:18.635 }, 00:17:18.635 "peer_address": { 00:17:18.635 "trtype": "TCP", 00:17:18.635 "adrfam": "IPv4", 00:17:18.635 "traddr": "10.0.0.1", 00:17:18.635 "trsvcid": "47354" 00:17:18.635 }, 00:17:18.635 "auth": { 00:17:18.635 "state": "completed", 00:17:18.635 "digest": "sha384", 00:17:18.635 "dhgroup": "ffdhe3072" 00:17:18.635 } 00:17:18.635 } 00:17:18.635 ]' 00:17:18.635 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:18.894 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.894 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.894 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:18.894 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.894 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.894 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.894 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.154 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmE4ZTU1OWU3MDZjNWUyZmE0MTZkYzg2MWZmZjZkYjFjYjNlZjcxZTk1OWJmYTAwoLxyXg==: --dhchap-ctrl-secret DHHC-1:01:MGE1MjlhZTJhNTA2ZTAzYmQxZjc2NjlhZTU3NGE5MDgFpX2a: 00:17:19.721 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.721 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:19.721 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.721 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.721 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.721 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.721 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:19.721 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:19.721 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:17:19.721 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:19.721 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:19.721 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:19.721 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:19.721 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.721 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:19.721 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.721 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.721 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.721 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:19.721 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:19.980 00:17:19.980 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:19.980 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.980 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:20.239 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.239 19:18:06 
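After the SPDK-host attach is verified, each pass also round-trips the same key through the kernel initiator and then tears everything down so the next combination starts clean; condensed from the auth.sh@49 through @56 lines above, where $key and $ckey stand for the run's own DHHC-1 secret strings:

    hostrpc bdev_nvme_detach_controller nvme0                # auth.sh@49
    # kernel-initiator check with the same DHHC-1 secrets (auth.sh@52)
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 006f0d1b-21c0-e711-906e-00163566263e \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    nvme disconnect -n "$subnqn"                             # auth.sh@55
    rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"  # auth.sh@56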
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.239 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.239 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.239 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.239 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:20.239 { 00:17:20.239 "cntlid": 71, 00:17:20.239 "qid": 0, 00:17:20.239 "state": "enabled", 00:17:20.239 "thread": "nvmf_tgt_poll_group_000", 00:17:20.239 "listen_address": { 00:17:20.239 "trtype": "TCP", 00:17:20.239 "adrfam": "IPv4", 00:17:20.239 "traddr": "10.0.0.2", 00:17:20.239 "trsvcid": "4420" 00:17:20.239 }, 00:17:20.239 "peer_address": { 00:17:20.239 "trtype": "TCP", 00:17:20.239 "adrfam": "IPv4", 00:17:20.239 "traddr": "10.0.0.1", 00:17:20.239 "trsvcid": "59276" 00:17:20.239 }, 00:17:20.239 "auth": { 00:17:20.239 "state": "completed", 00:17:20.239 "digest": "sha384", 00:17:20.239 "dhgroup": "ffdhe3072" 00:17:20.239 } 00:17:20.239 } 00:17:20.239 ]' 00:17:20.239 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:20.239 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.239 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:20.239 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:20.239 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:20.239 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.239 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.239 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.498 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YTMxZWI2MmFiYTZhMjUwNWIzZDFjN2QxYjRiY2EzODE1ZmI2NTM2MTVkMDlmM2IzZGMwZWQ1ZTUyNDA2MzEwZqZhPb0=: 00:17:21.067 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.067 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:21.067 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.067 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.067 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.067 19:18:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:21.067 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.067 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:21.067 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:21.326 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:17:21.326 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:21.326 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:21.326 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:21.326 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:21.326 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.326 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.326 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.326 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.326 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.326 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.326 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.585 00:17:21.585 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:21.585 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:21.585 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.585 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.585 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.585 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.585 19:18:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.843 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.843 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:21.843 { 00:17:21.843 "cntlid": 73, 00:17:21.843 "qid": 0, 00:17:21.843 "state": "enabled", 00:17:21.843 "thread": "nvmf_tgt_poll_group_000", 00:17:21.843 "listen_address": { 00:17:21.843 "trtype": "TCP", 00:17:21.843 "adrfam": "IPv4", 00:17:21.843 "traddr": "10.0.0.2", 00:17:21.843 "trsvcid": "4420" 00:17:21.843 }, 00:17:21.843 "peer_address": { 00:17:21.843 "trtype": "TCP", 00:17:21.843 "adrfam": "IPv4", 00:17:21.843 "traddr": "10.0.0.1", 00:17:21.843 "trsvcid": "59308" 00:17:21.843 }, 00:17:21.843 "auth": { 00:17:21.843 "state": "completed", 00:17:21.843 "digest": "sha384", 00:17:21.843 "dhgroup": "ffdhe4096" 00:17:21.843 } 00:17:21.843 } 00:17:21.843 ]' 00:17:21.844 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:21.844 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:21.844 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:21.844 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:21.844 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:21.844 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.844 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.844 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.102 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:M2JhZWU2YzMwNjNhNWNiMzk1MTZmYWUyNTJiNzE2ODk3ZTRhNDk2N2Y3NzZiMDgzJkG/Xw==: --dhchap-ctrl-secret DHHC-1:03:MGU4MmYyZTc0ZGM3MmE4NTIzYjBiZDQyNmYwYjhjNGUyODdmMzQ3MWEzYzAxZTlmMzAzODA1MDY1Mzk1ZjhmY/3k1VU=: 00:17:22.670 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.670 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:22.670 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.670 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.670 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.670 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:22.670 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:22.670 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:22.670 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:17:22.670 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:22.670 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:22.670 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:22.670 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:22.670 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.670 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.670 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.670 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.670 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.670 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.670 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.929 00:17:22.929 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:22.929 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:22.929 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.187 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.187 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.187 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.187 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.187 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.187 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:17:23.187 { 00:17:23.187 "cntlid": 75, 00:17:23.187 "qid": 0, 00:17:23.187 "state": "enabled", 00:17:23.187 "thread": "nvmf_tgt_poll_group_000", 00:17:23.187 "listen_address": { 00:17:23.187 "trtype": "TCP", 00:17:23.187 "adrfam": "IPv4", 00:17:23.187 "traddr": "10.0.0.2", 00:17:23.187 "trsvcid": "4420" 00:17:23.187 }, 00:17:23.187 "peer_address": { 00:17:23.187 "trtype": "TCP", 00:17:23.187 "adrfam": "IPv4", 00:17:23.187 "traddr": "10.0.0.1", 00:17:23.187 "trsvcid": "59340" 00:17:23.187 }, 00:17:23.187 "auth": { 00:17:23.187 "state": "completed", 00:17:23.187 "digest": "sha384", 00:17:23.187 "dhgroup": "ffdhe4096" 00:17:23.187 } 00:17:23.187 } 00:17:23.187 ]' 00:17:23.187 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:23.187 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:23.187 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:23.446 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:23.446 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:23.446 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.446 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.446 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.446 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZDQzZTViY2E5ZGMwYTk0ZTIyMDgzNTAzYTNhMjU5MTUqssYm: --dhchap-ctrl-secret DHHC-1:02:NTJlNmVhODYwYWRjNjMwZWIzM2YwYTgxNjRhODc0MWY4ZGViMWYyM2U4MDI4M2Q10rtSAw==: 00:17:24.014 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.014 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:24.014 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.014 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.014 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.014 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:24.014 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:24.014 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:24.272 
19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:17:24.272 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:24.272 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:24.272 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:24.272 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:24.272 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.272 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.272 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.272 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.272 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.272 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.272 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.531 00:17:24.531 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:24.531 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:24.531 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.790 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.790 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.790 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.790 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.790 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.790 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:24.790 { 00:17:24.790 "cntlid": 77, 00:17:24.790 "qid": 0, 00:17:24.790 "state": "enabled", 00:17:24.790 "thread": "nvmf_tgt_poll_group_000", 00:17:24.790 "listen_address": { 00:17:24.790 "trtype": "TCP", 00:17:24.790 "adrfam": "IPv4", 00:17:24.790 "traddr": "10.0.0.2", 00:17:24.790 "trsvcid": "4420" 00:17:24.790 }, 00:17:24.790 "peer_address": { 
00:17:24.790 "trtype": "TCP", 00:17:24.790 "adrfam": "IPv4", 00:17:24.790 "traddr": "10.0.0.1", 00:17:24.790 "trsvcid": "59372" 00:17:24.790 }, 00:17:24.790 "auth": { 00:17:24.790 "state": "completed", 00:17:24.790 "digest": "sha384", 00:17:24.790 "dhgroup": "ffdhe4096" 00:17:24.790 } 00:17:24.790 } 00:17:24.790 ]' 00:17:24.790 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:24.790 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:24.790 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:24.790 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:24.790 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:24.790 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.790 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.790 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.048 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmE4ZTU1OWU3MDZjNWUyZmE0MTZkYzg2MWZmZjZkYjFjYjNlZjcxZTk1OWJmYTAwoLxyXg==: --dhchap-ctrl-secret DHHC-1:01:MGE1MjlhZTJhNTA2ZTAzYmQxZjc2NjlhZTU3NGE5MDgFpX2a: 00:17:25.614 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.614 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:25.614 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.614 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.614 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.614 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:25.614 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:25.614 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:25.872 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:17:25.872 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:25.872 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:17:25.872 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:25.872 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:25.872 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.872 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:25.872 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.872 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.872 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.872 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:25.872 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:25.872 00:17:26.131 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:26.131 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:26.131 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.131 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.131 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.131 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.131 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.131 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.131 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:26.131 { 00:17:26.131 "cntlid": 79, 00:17:26.131 "qid": 0, 00:17:26.131 "state": "enabled", 00:17:26.131 "thread": "nvmf_tgt_poll_group_000", 00:17:26.131 "listen_address": { 00:17:26.131 "trtype": "TCP", 00:17:26.131 "adrfam": "IPv4", 00:17:26.131 "traddr": "10.0.0.2", 00:17:26.131 "trsvcid": "4420" 00:17:26.131 }, 00:17:26.131 "peer_address": { 00:17:26.131 "trtype": "TCP", 00:17:26.131 "adrfam": "IPv4", 00:17:26.131 "traddr": "10.0.0.1", 00:17:26.131 "trsvcid": "59414" 00:17:26.131 }, 00:17:26.131 "auth": { 00:17:26.131 "state": "completed", 00:17:26.131 "digest": "sha384", 00:17:26.131 "dhgroup": "ffdhe4096" 00:17:26.131 } 00:17:26.131 } 00:17:26.131 ]' 00:17:26.131 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:17:26.131 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.131 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:26.390 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:26.390 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:26.390 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.390 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.390 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.390 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YTMxZWI2MmFiYTZhMjUwNWIzZDFjN2QxYjRiY2EzODE1ZmI2NTM2MTVkMDlmM2IzZGMwZWQ1ZTUyNDA2MzEwZqZhPb0=: 00:17:26.959 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.959 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:26.959 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.959 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.959 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.959 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:26.959 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:26.959 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:26.959 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:27.218 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:17:27.218 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:27.218 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:27.218 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:27.218 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:27.219 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
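Beyond the in-process host on /var/tmp/host.sock, every pass also authenticates with the kernel initiator: nvme-cli connects to the same subsystem with the secrets passed on the command line, then disconnects, and the host registration is removed so the next pass starts clean. A sketch with the secrets elided (the real run uses the DHHC-1 base64 blobs visible in the trace; --dhchap-ctrl-secret is dropped for key3, which has no controller key):

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q <host-nqn> --hostid 006f0d1b-21c0-e711-906e-00163566263e \
      --dhchap-secret '<DHHC-1 host secret>' \
      --dhchap-ctrl-secret '<DHHC-1 controller secret>'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0  # expect: disconnected 1 controller(s)
  rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <host-nqn>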
00:17:27.219 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.219 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.219 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.219 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.219 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.219 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.477 00:17:27.477 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:27.477 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:27.477 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.735 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.735 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.735 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.735 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.735 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.735 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:27.735 { 00:17:27.735 "cntlid": 81, 00:17:27.735 "qid": 0, 00:17:27.735 "state": "enabled", 00:17:27.735 "thread": "nvmf_tgt_poll_group_000", 00:17:27.735 "listen_address": { 00:17:27.735 "trtype": "TCP", 00:17:27.735 "adrfam": "IPv4", 00:17:27.735 "traddr": "10.0.0.2", 00:17:27.735 "trsvcid": "4420" 00:17:27.736 }, 00:17:27.736 "peer_address": { 00:17:27.736 "trtype": "TCP", 00:17:27.736 "adrfam": "IPv4", 00:17:27.736 "traddr": "10.0.0.1", 00:17:27.736 "trsvcid": "59434" 00:17:27.736 }, 00:17:27.736 "auth": { 00:17:27.736 "state": "completed", 00:17:27.736 "digest": "sha384", 00:17:27.736 "dhgroup": "ffdhe6144" 00:17:27.736 } 00:17:27.736 } 00:17:27.736 ]' 00:17:27.736 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:27.736 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.736 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:27.736 19:18:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:27.736 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:27.994 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.994 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.994 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.994 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:M2JhZWU2YzMwNjNhNWNiMzk1MTZmYWUyNTJiNzE2ODk3ZTRhNDk2N2Y3NzZiMDgzJkG/Xw==: --dhchap-ctrl-secret DHHC-1:03:MGU4MmYyZTc0ZGM3MmE4NTIzYjBiZDQyNmYwYjhjNGUyODdmMzQ3MWEzYzAxZTlmMzAzODA1MDY1Mzk1ZjhmY/3k1VU=: 00:17:28.563 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.563 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:28.563 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.563 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.563 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.563 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:28.563 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:28.563 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:28.821 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:17:28.821 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:28.821 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:28.821 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:28.821 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:28.821 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.821 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.821 19:18:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.821 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.822 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.822 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.822 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.080 00:17:29.080 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:29.080 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:29.080 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.358 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.358 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.358 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.358 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.358 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.358 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:29.358 { 00:17:29.358 "cntlid": 83, 00:17:29.358 "qid": 0, 00:17:29.358 "state": "enabled", 00:17:29.358 "thread": "nvmf_tgt_poll_group_000", 00:17:29.358 "listen_address": { 00:17:29.358 "trtype": "TCP", 00:17:29.358 "adrfam": "IPv4", 00:17:29.358 "traddr": "10.0.0.2", 00:17:29.358 "trsvcid": "4420" 00:17:29.358 }, 00:17:29.358 "peer_address": { 00:17:29.358 "trtype": "TCP", 00:17:29.358 "adrfam": "IPv4", 00:17:29.358 "traddr": "10.0.0.1", 00:17:29.358 "trsvcid": "59448" 00:17:29.358 }, 00:17:29.358 "auth": { 00:17:29.358 "state": "completed", 00:17:29.358 "digest": "sha384", 00:17:29.358 "dhgroup": "ffdhe6144" 00:17:29.358 } 00:17:29.358 } 00:17:29.358 ]' 00:17:29.358 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:29.358 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:29.358 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:29.358 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:29.358 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:29.358 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.358 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.358 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.645 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZDQzZTViY2E5ZGMwYTk0ZTIyMDgzNTAzYTNhMjU5MTUqssYm: --dhchap-ctrl-secret DHHC-1:02:NTJlNmVhODYwYWRjNjMwZWIzM2YwYTgxNjRhODc0MWY4ZGViMWYyM2U4MDI4M2Q10rtSAw==: 00:17:30.213 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.213 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:30.213 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.213 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.213 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.213 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:30.213 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:30.213 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:30.213 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:17:30.213 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:30.213 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:30.213 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:30.213 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:30.213 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.213 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.213 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.213 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.473 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.473 19:18:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.473 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.732 00:17:30.732 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:30.732 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:30.732 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.732 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.732 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.732 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.732 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.991 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.991 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:30.991 { 00:17:30.991 "cntlid": 85, 00:17:30.991 "qid": 0, 00:17:30.991 "state": "enabled", 00:17:30.991 "thread": "nvmf_tgt_poll_group_000", 00:17:30.991 "listen_address": { 00:17:30.991 "trtype": "TCP", 00:17:30.991 "adrfam": "IPv4", 00:17:30.991 "traddr": "10.0.0.2", 00:17:30.991 "trsvcid": "4420" 00:17:30.991 }, 00:17:30.991 "peer_address": { 00:17:30.991 "trtype": "TCP", 00:17:30.991 "adrfam": "IPv4", 00:17:30.991 "traddr": "10.0.0.1", 00:17:30.991 "trsvcid": "39758" 00:17:30.991 }, 00:17:30.991 "auth": { 00:17:30.991 "state": "completed", 00:17:30.991 "digest": "sha384", 00:17:30.991 "dhgroup": "ffdhe6144" 00:17:30.991 } 00:17:30.991 } 00:17:30.991 ]' 00:17:30.991 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:30.991 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.991 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:30.991 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:30.991 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.991 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.991 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.991 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.250 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmE4ZTU1OWU3MDZjNWUyZmE0MTZkYzg2MWZmZjZkYjFjYjNlZjcxZTk1OWJmYTAwoLxyXg==: --dhchap-ctrl-secret DHHC-1:01:MGE1MjlhZTJhNTA2ZTAzYmQxZjc2NjlhZTU3NGE5MDgFpX2a: 00:17:31.818 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.818 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:31.818 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.818 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.818 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.818 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:31.818 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:31.818 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:31.818 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:17:31.818 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.819 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:31.819 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:31.819 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:31.819 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.819 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:31.819 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.819 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.819 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.819 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:31.819 19:18:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:32.386 00:17:32.386 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:32.386 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:32.386 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.386 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.386 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.386 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.386 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.386 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.386 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:32.386 { 00:17:32.386 "cntlid": 87, 00:17:32.386 "qid": 0, 00:17:32.386 "state": "enabled", 00:17:32.386 "thread": "nvmf_tgt_poll_group_000", 00:17:32.386 "listen_address": { 00:17:32.386 "trtype": "TCP", 00:17:32.386 "adrfam": "IPv4", 00:17:32.386 "traddr": "10.0.0.2", 00:17:32.386 "trsvcid": "4420" 00:17:32.386 }, 00:17:32.386 "peer_address": { 00:17:32.386 "trtype": "TCP", 00:17:32.386 "adrfam": "IPv4", 00:17:32.386 "traddr": "10.0.0.1", 00:17:32.386 "trsvcid": "39784" 00:17:32.386 }, 00:17:32.386 "auth": { 00:17:32.386 "state": "completed", 00:17:32.386 "digest": "sha384", 00:17:32.386 "dhgroup": "ffdhe6144" 00:17:32.386 } 00:17:32.386 } 00:17:32.386 ]' 00:17:32.386 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:32.387 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:32.387 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:32.387 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:32.387 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:32.645 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.645 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.645 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.645 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e 
--dhchap-secret DHHC-1:03:YTMxZWI2MmFiYTZhMjUwNWIzZDFjN2QxYjRiY2EzODE1ZmI2NTM2MTVkMDlmM2IzZGMwZWQ1ZTUyNDA2MzEwZqZhPb0=: 00:17:33.211 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.211 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:33.211 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.211 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.211 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.211 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:33.211 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:33.211 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:33.211 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:33.470 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:17:33.470 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:33.470 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:33.470 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:33.470 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:33.470 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.470 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.470 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.470 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.470 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.470 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.470 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.038 00:17:34.038 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.038 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.038 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.038 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.038 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.038 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.038 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.038 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.038 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:34.038 { 00:17:34.038 "cntlid": 89, 00:17:34.038 "qid": 0, 00:17:34.038 "state": "enabled", 00:17:34.038 "thread": "nvmf_tgt_poll_group_000", 00:17:34.038 "listen_address": { 00:17:34.038 "trtype": "TCP", 00:17:34.038 "adrfam": "IPv4", 00:17:34.038 "traddr": "10.0.0.2", 00:17:34.038 "trsvcid": "4420" 00:17:34.038 }, 00:17:34.038 "peer_address": { 00:17:34.038 "trtype": "TCP", 00:17:34.038 "adrfam": "IPv4", 00:17:34.038 "traddr": "10.0.0.1", 00:17:34.038 "trsvcid": "39810" 00:17:34.038 }, 00:17:34.038 "auth": { 00:17:34.038 "state": "completed", 00:17:34.038 "digest": "sha384", 00:17:34.038 "dhgroup": "ffdhe8192" 00:17:34.038 } 00:17:34.038 } 00:17:34.038 ]' 00:17:34.038 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:34.038 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.038 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:34.297 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:34.297 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:34.297 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.297 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.297 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.297 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:M2JhZWU2YzMwNjNhNWNiMzk1MTZmYWUyNTJiNzE2ODk3ZTRhNDk2N2Y3NzZiMDgzJkG/Xw==: --dhchap-ctrl-secret DHHC-1:03:MGU4MmYyZTc0ZGM3MmE4NTIzYjBiZDQyNmYwYjhjNGUyODdmMzQ3MWEzYzAxZTlmMzAzODA1MDY1Mzk1ZjhmY/3k1VU=: 00:17:34.863 19:18:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.863 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:34.863 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.863 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.863 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.863 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:34.863 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:34.863 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:35.122 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:17:35.122 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:35.122 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:35.122 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:35.122 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:35.122 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.122 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.122 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.122 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.122 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.122 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.122 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.688 00:17:35.688 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:35.688 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:17:35.688 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.688 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.688 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.688 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.688 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.688 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.688 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:35.688 { 00:17:35.688 "cntlid": 91, 00:17:35.688 "qid": 0, 00:17:35.688 "state": "enabled", 00:17:35.688 "thread": "nvmf_tgt_poll_group_000", 00:17:35.688 "listen_address": { 00:17:35.688 "trtype": "TCP", 00:17:35.688 "adrfam": "IPv4", 00:17:35.689 "traddr": "10.0.0.2", 00:17:35.689 "trsvcid": "4420" 00:17:35.689 }, 00:17:35.689 "peer_address": { 00:17:35.689 "trtype": "TCP", 00:17:35.689 "adrfam": "IPv4", 00:17:35.689 "traddr": "10.0.0.1", 00:17:35.689 "trsvcid": "39846" 00:17:35.689 }, 00:17:35.689 "auth": { 00:17:35.689 "state": "completed", 00:17:35.689 "digest": "sha384", 00:17:35.689 "dhgroup": "ffdhe8192" 00:17:35.689 } 00:17:35.689 } 00:17:35.689 ]' 00:17:35.689 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:35.689 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:35.689 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:35.947 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:35.947 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:35.947 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.947 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.947 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.947 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZDQzZTViY2E5ZGMwYTk0ZTIyMDgzNTAzYTNhMjU5MTUqssYm: --dhchap-ctrl-secret DHHC-1:02:NTJlNmVhODYwYWRjNjMwZWIzM2YwYTgxNjRhODc0MWY4ZGViMWYyM2U4MDI4M2Q10rtSAw==: 00:17:36.515 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.515 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:36.515 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.515 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.515 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.515 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:36.515 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:36.515 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:36.773 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:17:36.773 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:36.773 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:36.773 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:36.773 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:36.773 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.773 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.773 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.773 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.773 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.773 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.773 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.340 00:17:37.340 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:37.340 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:37.340 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.340 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:17:37.340 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.340 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.340 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.340 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.340 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:37.340 { 00:17:37.340 "cntlid": 93, 00:17:37.340 "qid": 0, 00:17:37.340 "state": "enabled", 00:17:37.340 "thread": "nvmf_tgt_poll_group_000", 00:17:37.340 "listen_address": { 00:17:37.340 "trtype": "TCP", 00:17:37.340 "adrfam": "IPv4", 00:17:37.340 "traddr": "10.0.0.2", 00:17:37.340 "trsvcid": "4420" 00:17:37.340 }, 00:17:37.340 "peer_address": { 00:17:37.340 "trtype": "TCP", 00:17:37.340 "adrfam": "IPv4", 00:17:37.340 "traddr": "10.0.0.1", 00:17:37.340 "trsvcid": "39876" 00:17:37.340 }, 00:17:37.340 "auth": { 00:17:37.340 "state": "completed", 00:17:37.340 "digest": "sha384", 00:17:37.340 "dhgroup": "ffdhe8192" 00:17:37.340 } 00:17:37.340 } 00:17:37.340 ]' 00:17:37.340 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:37.340 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:37.341 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:37.599 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:37.599 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:37.599 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.599 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.599 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.858 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmE4ZTU1OWU3MDZjNWUyZmE0MTZkYzg2MWZmZjZkYjFjYjNlZjcxZTk1OWJmYTAwoLxyXg==: --dhchap-ctrl-secret DHHC-1:01:MGE1MjlhZTJhNTA2ZTAzYmQxZjc2NjlhZTU3NGE5MDgFpX2a: 00:17:38.426 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.426 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:38.426 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.426 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.426 19:18:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.426 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:38.426 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:38.426 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:38.426 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:17:38.427 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:38.427 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:38.427 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:38.427 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:38.427 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.427 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:38.427 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.427 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.427 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.427 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:38.427 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:38.995 00:17:38.995 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:38.995 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:38.995 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.995 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.995 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.995 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.995 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:17:38.995 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.995 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:38.995 { 00:17:38.995 "cntlid": 95, 00:17:38.995 "qid": 0, 00:17:38.995 "state": "enabled", 00:17:38.995 "thread": "nvmf_tgt_poll_group_000", 00:17:38.995 "listen_address": { 00:17:38.995 "trtype": "TCP", 00:17:38.995 "adrfam": "IPv4", 00:17:38.995 "traddr": "10.0.0.2", 00:17:38.995 "trsvcid": "4420" 00:17:38.995 }, 00:17:38.995 "peer_address": { 00:17:38.995 "trtype": "TCP", 00:17:38.995 "adrfam": "IPv4", 00:17:38.995 "traddr": "10.0.0.1", 00:17:38.995 "trsvcid": "39908" 00:17:38.995 }, 00:17:38.995 "auth": { 00:17:38.995 "state": "completed", 00:17:38.995 "digest": "sha384", 00:17:38.995 "dhgroup": "ffdhe8192" 00:17:38.995 } 00:17:38.995 } 00:17:38.995 ]' 00:17:39.254 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:39.254 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.254 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:39.254 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:39.254 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:39.254 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.254 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.254 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.513 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YTMxZWI2MmFiYTZhMjUwNWIzZDFjN2QxYjRiY2EzODE1ZmI2NTM2MTVkMDlmM2IzZGMwZWQ1ZTUyNDA2MzEwZqZhPb0=: 00:17:40.082 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.082 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:40.082 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.082 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.082 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.082 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:40.082 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:40.082 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.082 19:18:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:40.082 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:40.082 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:17:40.082 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:40.082 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:40.082 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:40.082 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:40.082 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.082 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.082 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.082 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.082 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.082 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.082 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.341 00:17:40.341 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:40.341 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:40.341 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.600 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.601 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.601 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.601 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.601 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.601 19:18:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:40.601 { 00:17:40.601 "cntlid": 97, 00:17:40.601 "qid": 0, 00:17:40.601 "state": "enabled", 00:17:40.601 "thread": "nvmf_tgt_poll_group_000", 00:17:40.601 "listen_address": { 00:17:40.601 "trtype": "TCP", 00:17:40.601 "adrfam": "IPv4", 00:17:40.601 "traddr": "10.0.0.2", 00:17:40.601 "trsvcid": "4420" 00:17:40.601 }, 00:17:40.601 "peer_address": { 00:17:40.601 "trtype": "TCP", 00:17:40.601 "adrfam": "IPv4", 00:17:40.601 "traddr": "10.0.0.1", 00:17:40.601 "trsvcid": "41298" 00:17:40.601 }, 00:17:40.601 "auth": { 00:17:40.601 "state": "completed", 00:17:40.601 "digest": "sha512", 00:17:40.601 "dhgroup": "null" 00:17:40.601 } 00:17:40.601 } 00:17:40.601 ]' 00:17:40.601 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:40.601 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:40.601 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:40.601 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:40.601 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:40.601 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.601 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.601 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.860 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:M2JhZWU2YzMwNjNhNWNiMzk1MTZmYWUyNTJiNzE2ODk3ZTRhNDk2N2Y3NzZiMDgzJkG/Xw==: --dhchap-ctrl-secret DHHC-1:03:MGU4MmYyZTc0ZGM3MmE4NTIzYjBiZDQyNmYwYjhjNGUyODdmMzQ3MWEzYzAxZTlmMzAzODA1MDY1Mzk1ZjhmY/3k1VU=: 00:17:41.428 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.428 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:41.428 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.428 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.428 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.428 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.428 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:41.428 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:41.688 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:17:41.688 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:41.688 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:41.688 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:41.688 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:41.688 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.688 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.688 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.688 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.688 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.688 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.688 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.947 00:17:41.947 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:41.948 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:41.948 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.948 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.948 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.948 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.948 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.948 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.948 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.948 { 00:17:41.948 "cntlid": 99, 00:17:41.948 "qid": 0, 00:17:41.948 "state": "enabled", 00:17:41.948 "thread": "nvmf_tgt_poll_group_000", 00:17:41.948 "listen_address": { 00:17:41.948 "trtype": "TCP", 00:17:41.948 "adrfam": "IPv4", 00:17:41.948 
"traddr": "10.0.0.2", 00:17:41.948 "trsvcid": "4420" 00:17:41.948 }, 00:17:41.948 "peer_address": { 00:17:41.948 "trtype": "TCP", 00:17:41.948 "adrfam": "IPv4", 00:17:41.948 "traddr": "10.0.0.1", 00:17:41.948 "trsvcid": "41318" 00:17:41.948 }, 00:17:41.948 "auth": { 00:17:41.948 "state": "completed", 00:17:41.948 "digest": "sha512", 00:17:41.948 "dhgroup": "null" 00:17:41.948 } 00:17:41.948 } 00:17:41.948 ]' 00:17:41.948 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:42.207 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.207 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:42.207 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:42.207 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:42.207 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.207 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.207 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.466 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZDQzZTViY2E5ZGMwYTk0ZTIyMDgzNTAzYTNhMjU5MTUqssYm: --dhchap-ctrl-secret DHHC-1:02:NTJlNmVhODYwYWRjNjMwZWIzM2YwYTgxNjRhODc0MWY4ZGViMWYyM2U4MDI4M2Q10rtSAw==: 00:17:42.798 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.798 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:42.798 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.798 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.798 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.798 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:42.799 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:43.058 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:43.058 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:17:43.058 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:43.058 19:18:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:43.058 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:43.058 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:43.058 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.058 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.058 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.058 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.058 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.058 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.058 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.317 00:17:43.317 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:43.317 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.317 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:43.576 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.576 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.576 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.576 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.576 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.576 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:43.576 { 00:17:43.576 "cntlid": 101, 00:17:43.576 "qid": 0, 00:17:43.576 "state": "enabled", 00:17:43.576 "thread": "nvmf_tgt_poll_group_000", 00:17:43.576 "listen_address": { 00:17:43.576 "trtype": "TCP", 00:17:43.576 "adrfam": "IPv4", 00:17:43.576 "traddr": "10.0.0.2", 00:17:43.576 "trsvcid": "4420" 00:17:43.576 }, 00:17:43.576 "peer_address": { 00:17:43.576 "trtype": "TCP", 00:17:43.576 "adrfam": "IPv4", 00:17:43.576 "traddr": "10.0.0.1", 00:17:43.577 "trsvcid": "41336" 00:17:43.577 }, 00:17:43.577 "auth": { 00:17:43.577 "state": "completed", 00:17:43.577 "digest": "sha512", 00:17:43.577 "dhgroup": "null" 
00:17:43.577 } 00:17:43.577 } 00:17:43.577 ]' 00:17:43.577 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:43.577 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:43.577 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.577 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:43.577 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:43.577 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.577 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.577 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.836 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmE4ZTU1OWU3MDZjNWUyZmE0MTZkYzg2MWZmZjZkYjFjYjNlZjcxZTk1OWJmYTAwoLxyXg==: --dhchap-ctrl-secret DHHC-1:01:MGE1MjlhZTJhNTA2ZTAzYmQxZjc2NjlhZTU3NGE5MDgFpX2a: 00:17:44.404 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.404 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:44.404 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.404 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.404 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.404 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.404 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:44.404 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:44.663 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:17:44.664 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:44.664 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:44.664 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:44.664 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:44.664 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.664 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:44.664 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.664 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.664 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.664 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:44.664 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:44.923 00:17:44.923 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.923 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.923 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.923 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.923 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.923 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.923 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.923 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.923 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.923 { 00:17:44.923 "cntlid": 103, 00:17:44.923 "qid": 0, 00:17:44.923 "state": "enabled", 00:17:44.923 "thread": "nvmf_tgt_poll_group_000", 00:17:44.923 "listen_address": { 00:17:44.923 "trtype": "TCP", 00:17:44.923 "adrfam": "IPv4", 00:17:44.923 "traddr": "10.0.0.2", 00:17:44.923 "trsvcid": "4420" 00:17:44.923 }, 00:17:44.923 "peer_address": { 00:17:44.923 "trtype": "TCP", 00:17:44.923 "adrfam": "IPv4", 00:17:44.923 "traddr": "10.0.0.1", 00:17:44.923 "trsvcid": "41372" 00:17:44.923 }, 00:17:44.923 "auth": { 00:17:44.923 "state": "completed", 00:17:44.923 "digest": "sha512", 00:17:44.923 "dhgroup": "null" 00:17:44.923 } 00:17:44.923 } 00:17:44.923 ]' 00:17:44.923 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.923 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:45.183 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:45.183 19:18:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:45.183 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:45.183 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.183 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.183 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.183 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YTMxZWI2MmFiYTZhMjUwNWIzZDFjN2QxYjRiY2EzODE1ZmI2NTM2MTVkMDlmM2IzZGMwZWQ1ZTUyNDA2MzEwZqZhPb0=: 00:17:45.751 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.752 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:45.752 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.752 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.752 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.752 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:45.752 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:45.752 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:45.752 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:46.011 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:17:46.011 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:46.011 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:46.011 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:46.011 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:46.011 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.011 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.011 19:18:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.011 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.011 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.011 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.011 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.270 00:17:46.270 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:46.270 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.270 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:46.530 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.530 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.530 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.530 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.530 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.530 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.530 { 00:17:46.530 "cntlid": 105, 00:17:46.530 "qid": 0, 00:17:46.530 "state": "enabled", 00:17:46.530 "thread": "nvmf_tgt_poll_group_000", 00:17:46.530 "listen_address": { 00:17:46.530 "trtype": "TCP", 00:17:46.530 "adrfam": "IPv4", 00:17:46.530 "traddr": "10.0.0.2", 00:17:46.530 "trsvcid": "4420" 00:17:46.530 }, 00:17:46.530 "peer_address": { 00:17:46.530 "trtype": "TCP", 00:17:46.530 "adrfam": "IPv4", 00:17:46.530 "traddr": "10.0.0.1", 00:17:46.530 "trsvcid": "41402" 00:17:46.530 }, 00:17:46.530 "auth": { 00:17:46.530 "state": "completed", 00:17:46.530 "digest": "sha512", 00:17:46.530 "dhgroup": "ffdhe2048" 00:17:46.530 } 00:17:46.530 } 00:17:46.530 ]' 00:17:46.530 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.530 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.530 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.530 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:46.530 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.530 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.530 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.530 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.789 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:M2JhZWU2YzMwNjNhNWNiMzk1MTZmYWUyNTJiNzE2ODk3ZTRhNDk2N2Y3NzZiMDgzJkG/Xw==: --dhchap-ctrl-secret DHHC-1:03:MGU4MmYyZTc0ZGM3MmE4NTIzYjBiZDQyNmYwYjhjNGUyODdmMzQ3MWEzYzAxZTlmMzAzODA1MDY1Mzk1ZjhmY/3k1VU=: 00:17:47.357 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.357 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:47.357 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.357 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.357 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.357 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:47.357 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:47.357 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:47.616 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:17:47.616 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:47.616 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:47.616 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:47.616 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:47.616 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.616 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.616 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.616 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.616 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:17:47.616 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.616 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.616 00:17:47.616 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:47.616 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:47.616 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.876 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.876 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.876 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.876 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.876 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.876 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:47.876 { 00:17:47.876 "cntlid": 107, 00:17:47.876 "qid": 0, 00:17:47.876 "state": "enabled", 00:17:47.876 "thread": "nvmf_tgt_poll_group_000", 00:17:47.876 "listen_address": { 00:17:47.876 "trtype": "TCP", 00:17:47.876 "adrfam": "IPv4", 00:17:47.876 "traddr": "10.0.0.2", 00:17:47.876 "trsvcid": "4420" 00:17:47.876 }, 00:17:47.876 "peer_address": { 00:17:47.876 "trtype": "TCP", 00:17:47.876 "adrfam": "IPv4", 00:17:47.876 "traddr": "10.0.0.1", 00:17:47.876 "trsvcid": "41430" 00:17:47.876 }, 00:17:47.876 "auth": { 00:17:47.876 "state": "completed", 00:17:47.876 "digest": "sha512", 00:17:47.876 "dhgroup": "ffdhe2048" 00:17:47.876 } 00:17:47.876 } 00:17:47.876 ]' 00:17:47.876 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:47.876 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.876 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.136 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:48.136 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.136 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.136 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.136 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.136 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZDQzZTViY2E5ZGMwYTk0ZTIyMDgzNTAzYTNhMjU5MTUqssYm: --dhchap-ctrl-secret DHHC-1:02:NTJlNmVhODYwYWRjNjMwZWIzM2YwYTgxNjRhODc0MWY4ZGViMWYyM2U4MDI4M2Q10rtSAw==: 00:17:48.704 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.704 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:48.704 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.704 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.704 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.704 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:48.704 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:48.704 19:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:48.964 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:17:48.964 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:48.964 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:48.964 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:48.964 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:48.964 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.964 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.964 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.964 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.964 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.964 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
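Every pass in this trace follows the same per-key pattern: pin the host to one digest/dhgroup pair, register the key (and the controller key, when one exists) on the subsystem, attach a controller over the host RPC socket, confirm the qpair authenticated with the expected parameters, then detach. A condensed sketch of that RPC-side loop, assuming the rpc.py path, socket, and NQNs used throughout this log (the variable names are illustrative, not part of target/auth.sh, and the jq -e check stands in for the separate jq/[[ comparisons in the trace):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOSTSOCK=/var/tmp/host.sock
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
  for keyid in 0 1 2 3; do
      ckey_opt=
      # key3 carries no controller (bidirectional) key in this run
      [ "$keyid" != 3 ] && ckey_opt="--dhchap-ctrlr-key ckey$keyid"
      # host side: allow only the digest/dhgroup pair under test
      $RPC -s $HOSTSOCK bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
      # target side: bind the host NQN to its DH-HMAC-CHAP key(s)
      $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key$keyid $ckey_opt
      # attach, verify the qpair completed auth, then detach
      $RPC -s $HOSTSOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
          -q $HOSTNQN -n $SUBNQN --dhchap-key key$keyid $ckey_opt
      $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -e '.[0].auth.state == "completed"'
      $RPC -s $HOSTSOCK bdev_nvme_detach_controller nvme0
  done

The kernel-initiator half of each pass (nvme connect/disconnect and the closing nvmf_subsystem_remove_host) is sketched at the end of this section.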
00:17:48.964 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.223 00:17:49.223 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.223 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.223 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.482 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.482 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.482 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.483 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.483 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.483 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:49.483 { 00:17:49.483 "cntlid": 109, 00:17:49.483 "qid": 0, 00:17:49.483 "state": "enabled", 00:17:49.483 "thread": "nvmf_tgt_poll_group_000", 00:17:49.483 "listen_address": { 00:17:49.483 "trtype": "TCP", 00:17:49.483 "adrfam": "IPv4", 00:17:49.483 "traddr": "10.0.0.2", 00:17:49.483 "trsvcid": "4420" 00:17:49.483 }, 00:17:49.483 "peer_address": { 00:17:49.483 "trtype": "TCP", 00:17:49.483 "adrfam": "IPv4", 00:17:49.483 "traddr": "10.0.0.1", 00:17:49.483 "trsvcid": "41456" 00:17:49.483 }, 00:17:49.483 "auth": { 00:17:49.483 "state": "completed", 00:17:49.483 "digest": "sha512", 00:17:49.483 "dhgroup": "ffdhe2048" 00:17:49.483 } 00:17:49.483 } 00:17:49.483 ]' 00:17:49.483 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.483 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:49.483 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:49.483 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:49.483 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:49.483 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.483 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.483 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.742 19:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 
--hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmE4ZTU1OWU3MDZjNWUyZmE0MTZkYzg2MWZmZjZkYjFjYjNlZjcxZTk1OWJmYTAwoLxyXg==: --dhchap-ctrl-secret DHHC-1:01:MGE1MjlhZTJhNTA2ZTAzYmQxZjc2NjlhZTU3NGE5MDgFpX2a: 00:17:50.310 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.310 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:50.310 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.310 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.310 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.310 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.310 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:50.310 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:50.310 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:17:50.310 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.310 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:50.310 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:50.310 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:50.310 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.310 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:50.310 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.310 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.310 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.310 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:50.310 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:50.570 00:17:50.570 19:18:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.570 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.570 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.829 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.829 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.829 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.829 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.829 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.829 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.829 { 00:17:50.829 "cntlid": 111, 00:17:50.829 "qid": 0, 00:17:50.829 "state": "enabled", 00:17:50.829 "thread": "nvmf_tgt_poll_group_000", 00:17:50.829 "listen_address": { 00:17:50.829 "trtype": "TCP", 00:17:50.829 "adrfam": "IPv4", 00:17:50.829 "traddr": "10.0.0.2", 00:17:50.829 "trsvcid": "4420" 00:17:50.829 }, 00:17:50.829 "peer_address": { 00:17:50.829 "trtype": "TCP", 00:17:50.829 "adrfam": "IPv4", 00:17:50.829 "traddr": "10.0.0.1", 00:17:50.829 "trsvcid": "36426" 00:17:50.829 }, 00:17:50.829 "auth": { 00:17:50.829 "state": "completed", 00:17:50.829 "digest": "sha512", 00:17:50.829 "dhgroup": "ffdhe2048" 00:17:50.829 } 00:17:50.829 } 00:17:50.829 ]' 00:17:50.829 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.829 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.829 19:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.829 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:50.829 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.088 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.088 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.088 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.088 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YTMxZWI2MmFiYTZhMjUwNWIzZDFjN2QxYjRiY2EzODE1ZmI2NTM2MTVkMDlmM2IzZGMwZWQ1ZTUyNDA2MzEwZqZhPb0=: 00:17:51.656 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.656 19:18:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:51.656 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.656 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.656 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.656 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:51.656 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.656 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:51.656 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:51.916 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:17:51.916 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.916 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:51.916 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:51.916 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:51.916 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.916 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.916 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.916 19:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.916 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.916 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.916 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.175 00:17:52.175 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.175 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.175 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.434 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.434 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.434 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.434 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.434 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.434 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.434 { 00:17:52.434 "cntlid": 113, 00:17:52.434 "qid": 0, 00:17:52.434 "state": "enabled", 00:17:52.434 "thread": "nvmf_tgt_poll_group_000", 00:17:52.434 "listen_address": { 00:17:52.434 "trtype": "TCP", 00:17:52.434 "adrfam": "IPv4", 00:17:52.434 "traddr": "10.0.0.2", 00:17:52.434 "trsvcid": "4420" 00:17:52.434 }, 00:17:52.434 "peer_address": { 00:17:52.434 "trtype": "TCP", 00:17:52.434 "adrfam": "IPv4", 00:17:52.434 "traddr": "10.0.0.1", 00:17:52.434 "trsvcid": "36462" 00:17:52.434 }, 00:17:52.434 "auth": { 00:17:52.434 "state": "completed", 00:17:52.434 "digest": "sha512", 00:17:52.434 "dhgroup": "ffdhe3072" 00:17:52.434 } 00:17:52.434 } 00:17:52.434 ]' 00:17:52.434 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.434 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.434 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.434 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:52.434 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.434 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.434 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.435 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.694 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:M2JhZWU2YzMwNjNhNWNiMzk1MTZmYWUyNTJiNzE2ODk3ZTRhNDk2N2Y3NzZiMDgzJkG/Xw==: --dhchap-ctrl-secret DHHC-1:03:MGU4MmYyZTc0ZGM3MmE4NTIzYjBiZDQyNmYwYjhjNGUyODdmMzQ3MWEzYzAxZTlmMzAzODA1MDY1Mzk1ZjhmY/3k1VU=: 00:17:53.262 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.262 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:53.262 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.262 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.262 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.262 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.262 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:53.262 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:53.262 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:17:53.262 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:53.262 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:53.262 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:53.262 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:53.262 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.262 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.262 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.262 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.262 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.262 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.262 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.521 00:17:53.521 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:53.521 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:53.521 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.780 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:17:53.780 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.780 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.780 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.780 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.780 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:53.780 { 00:17:53.780 "cntlid": 115, 00:17:53.780 "qid": 0, 00:17:53.780 "state": "enabled", 00:17:53.780 "thread": "nvmf_tgt_poll_group_000", 00:17:53.780 "listen_address": { 00:17:53.780 "trtype": "TCP", 00:17:53.780 "adrfam": "IPv4", 00:17:53.780 "traddr": "10.0.0.2", 00:17:53.780 "trsvcid": "4420" 00:17:53.780 }, 00:17:53.780 "peer_address": { 00:17:53.780 "trtype": "TCP", 00:17:53.780 "adrfam": "IPv4", 00:17:53.780 "traddr": "10.0.0.1", 00:17:53.780 "trsvcid": "36490" 00:17:53.780 }, 00:17:53.780 "auth": { 00:17:53.780 "state": "completed", 00:17:53.780 "digest": "sha512", 00:17:53.780 "dhgroup": "ffdhe3072" 00:17:53.780 } 00:17:53.780 } 00:17:53.780 ]' 00:17:53.780 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:53.780 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.780 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:53.780 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:53.780 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.040 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.040 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.040 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.040 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZDQzZTViY2E5ZGMwYTk0ZTIyMDgzNTAzYTNhMjU5MTUqssYm: --dhchap-ctrl-secret DHHC-1:02:NTJlNmVhODYwYWRjNjMwZWIzM2YwYTgxNjRhODc0MWY4ZGViMWYyM2U4MDI4M2Q10rtSAw==: 00:17:54.607 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.607 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:54.607 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.607 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.607 19:18:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.607 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:54.607 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:54.607 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:54.869 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:17:54.869 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.869 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:54.869 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:54.869 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:54.869 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.869 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.869 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.869 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.869 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.869 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.869 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.127 00:17:55.127 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.127 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.127 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.386 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.386 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.386 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.386 19:18:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.386 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.386 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:55.386 { 00:17:55.386 "cntlid": 117, 00:17:55.386 "qid": 0, 00:17:55.386 "state": "enabled", 00:17:55.386 "thread": "nvmf_tgt_poll_group_000", 00:17:55.386 "listen_address": { 00:17:55.386 "trtype": "TCP", 00:17:55.386 "adrfam": "IPv4", 00:17:55.386 "traddr": "10.0.0.2", 00:17:55.386 "trsvcid": "4420" 00:17:55.386 }, 00:17:55.386 "peer_address": { 00:17:55.386 "trtype": "TCP", 00:17:55.386 "adrfam": "IPv4", 00:17:55.386 "traddr": "10.0.0.1", 00:17:55.386 "trsvcid": "36524" 00:17:55.386 }, 00:17:55.386 "auth": { 00:17:55.386 "state": "completed", 00:17:55.386 "digest": "sha512", 00:17:55.386 "dhgroup": "ffdhe3072" 00:17:55.386 } 00:17:55.386 } 00:17:55.386 ]' 00:17:55.386 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:55.386 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.386 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:55.386 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:55.386 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:55.386 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.386 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.386 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.647 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmE4ZTU1OWU3MDZjNWUyZmE0MTZkYzg2MWZmZjZkYjFjYjNlZjcxZTk1OWJmYTAwoLxyXg==: --dhchap-ctrl-secret DHHC-1:01:MGE1MjlhZTJhNTA2ZTAzYmQxZjc2NjlhZTU3NGE5MDgFpX2a: 00:17:56.214 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.214 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:56.214 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.214 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.214 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.214 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.214 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:17:56.214 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:56.214 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:17:56.214 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.214 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:56.214 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:56.214 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:56.214 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.214 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:56.214 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.214 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.473 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.473 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.473 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.473 00:17:56.473 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.473 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.473 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.733 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.733 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.733 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.733 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.733 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.733 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.733 { 00:17:56.733 "cntlid": 119, 00:17:56.733 "qid": 0, 00:17:56.733 "state": "enabled", 00:17:56.733 "thread": 
"nvmf_tgt_poll_group_000", 00:17:56.733 "listen_address": { 00:17:56.733 "trtype": "TCP", 00:17:56.733 "adrfam": "IPv4", 00:17:56.733 "traddr": "10.0.0.2", 00:17:56.733 "trsvcid": "4420" 00:17:56.733 }, 00:17:56.733 "peer_address": { 00:17:56.733 "trtype": "TCP", 00:17:56.733 "adrfam": "IPv4", 00:17:56.733 "traddr": "10.0.0.1", 00:17:56.733 "trsvcid": "36552" 00:17:56.733 }, 00:17:56.733 "auth": { 00:17:56.733 "state": "completed", 00:17:56.733 "digest": "sha512", 00:17:56.733 "dhgroup": "ffdhe3072" 00:17:56.733 } 00:17:56.733 } 00:17:56.733 ]' 00:17:56.733 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.733 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.733 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.733 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:56.733 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.992 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.992 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.992 19:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.992 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YTMxZWI2MmFiYTZhMjUwNWIzZDFjN2QxYjRiY2EzODE1ZmI2NTM2MTVkMDlmM2IzZGMwZWQ1ZTUyNDA2MzEwZqZhPb0=: 00:17:57.560 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.560 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:57.560 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.560 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.560 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.560 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:57.560 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.560 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:57.560 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:57.819 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:17:57.819 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.819 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:57.819 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:57.819 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:57.819 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.819 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.819 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.819 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.819 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.819 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.819 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.078 00:17:58.078 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.078 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.078 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.336 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.336 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.336 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.336 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.336 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.336 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.336 { 00:17:58.336 "cntlid": 121, 00:17:58.336 "qid": 0, 00:17:58.336 "state": "enabled", 00:17:58.336 "thread": "nvmf_tgt_poll_group_000", 00:17:58.336 "listen_address": { 00:17:58.336 "trtype": "TCP", 00:17:58.336 "adrfam": "IPv4", 00:17:58.336 "traddr": "10.0.0.2", 00:17:58.336 "trsvcid": "4420" 00:17:58.336 }, 00:17:58.336 "peer_address": { 00:17:58.336 "trtype": "TCP", 00:17:58.336 "adrfam": 
"IPv4", 00:17:58.336 "traddr": "10.0.0.1", 00:17:58.336 "trsvcid": "36576" 00:17:58.336 }, 00:17:58.336 "auth": { 00:17:58.336 "state": "completed", 00:17:58.336 "digest": "sha512", 00:17:58.336 "dhgroup": "ffdhe4096" 00:17:58.336 } 00:17:58.336 } 00:17:58.336 ]' 00:17:58.336 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.336 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.336 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.336 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:58.336 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.336 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.336 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.336 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.595 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:M2JhZWU2YzMwNjNhNWNiMzk1MTZmYWUyNTJiNzE2ODk3ZTRhNDk2N2Y3NzZiMDgzJkG/Xw==: --dhchap-ctrl-secret DHHC-1:03:MGU4MmYyZTc0ZGM3MmE4NTIzYjBiZDQyNmYwYjhjNGUyODdmMzQ3MWEzYzAxZTlmMzAzODA1MDY1Mzk1ZjhmY/3k1VU=: 00:17:59.164 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.164 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:59.164 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.164 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.164 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.164 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.164 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:59.164 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:59.164 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:17:59.164 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.164 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:59.164 
19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:59.164 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:59.164 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.164 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.164 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.164 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.164 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.164 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.164 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.423 00:17:59.423 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.423 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.423 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.682 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.682 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.682 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.682 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.682 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.682 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.682 { 00:17:59.682 "cntlid": 123, 00:17:59.682 "qid": 0, 00:17:59.682 "state": "enabled", 00:17:59.682 "thread": "nvmf_tgt_poll_group_000", 00:17:59.682 "listen_address": { 00:17:59.682 "trtype": "TCP", 00:17:59.682 "adrfam": "IPv4", 00:17:59.682 "traddr": "10.0.0.2", 00:17:59.682 "trsvcid": "4420" 00:17:59.682 }, 00:17:59.682 "peer_address": { 00:17:59.682 "trtype": "TCP", 00:17:59.682 "adrfam": "IPv4", 00:17:59.682 "traddr": "10.0.0.1", 00:17:59.682 "trsvcid": "36600" 00:17:59.682 }, 00:17:59.682 "auth": { 00:17:59.682 "state": "completed", 00:17:59.682 "digest": "sha512", 00:17:59.682 "dhgroup": "ffdhe4096" 00:17:59.682 } 00:17:59.682 } 00:17:59.682 ]' 00:17:59.682 19:18:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.682 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.682 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.941 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:59.941 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.941 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.941 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.941 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.941 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZDQzZTViY2E5ZGMwYTk0ZTIyMDgzNTAzYTNhMjU5MTUqssYm: --dhchap-ctrl-secret DHHC-1:02:NTJlNmVhODYwYWRjNjMwZWIzM2YwYTgxNjRhODc0MWY4ZGViMWYyM2U4MDI4M2Q10rtSAw==: 00:18:00.510 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.510 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:00.510 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.510 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.510 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.510 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.510 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:00.510 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:00.769 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:18:00.769 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.769 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:00.769 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:00.769 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:00.769 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:18:00.769 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.769 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.769 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.769 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.769 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.769 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.028 00:18:01.028 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.028 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.028 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.286 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.286 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.286 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.286 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.286 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.286 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:01.286 { 00:18:01.286 "cntlid": 125, 00:18:01.286 "qid": 0, 00:18:01.286 "state": "enabled", 00:18:01.286 "thread": "nvmf_tgt_poll_group_000", 00:18:01.286 "listen_address": { 00:18:01.286 "trtype": "TCP", 00:18:01.286 "adrfam": "IPv4", 00:18:01.286 "traddr": "10.0.0.2", 00:18:01.286 "trsvcid": "4420" 00:18:01.286 }, 00:18:01.286 "peer_address": { 00:18:01.286 "trtype": "TCP", 00:18:01.286 "adrfam": "IPv4", 00:18:01.286 "traddr": "10.0.0.1", 00:18:01.286 "trsvcid": "56828" 00:18:01.286 }, 00:18:01.286 "auth": { 00:18:01.286 "state": "completed", 00:18:01.286 "digest": "sha512", 00:18:01.287 "dhgroup": "ffdhe4096" 00:18:01.287 } 00:18:01.287 } 00:18:01.287 ]' 00:18:01.287 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.287 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.287 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.287 
19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:01.287 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:01.287 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.287 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.287 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.545 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmE4ZTU1OWU3MDZjNWUyZmE0MTZkYzg2MWZmZjZkYjFjYjNlZjcxZTk1OWJmYTAwoLxyXg==: --dhchap-ctrl-secret DHHC-1:01:MGE1MjlhZTJhNTA2ZTAzYmQxZjc2NjlhZTU3NGE5MDgFpX2a: 00:18:02.112 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.112 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:02.112 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.112 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.112 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.112 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.112 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:02.112 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:02.112 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:18:02.112 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.112 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:02.112 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:02.112 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:02.112 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.372 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:02.372 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:02.372 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.372 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.372 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:02.372 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:02.372 00:18:02.630 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.630 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.630 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.630 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.630 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.631 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.631 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.631 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.631 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.631 { 00:18:02.631 "cntlid": 127, 00:18:02.631 "qid": 0, 00:18:02.631 "state": "enabled", 00:18:02.631 "thread": "nvmf_tgt_poll_group_000", 00:18:02.631 "listen_address": { 00:18:02.631 "trtype": "TCP", 00:18:02.631 "adrfam": "IPv4", 00:18:02.631 "traddr": "10.0.0.2", 00:18:02.631 "trsvcid": "4420" 00:18:02.631 }, 00:18:02.631 "peer_address": { 00:18:02.631 "trtype": "TCP", 00:18:02.631 "adrfam": "IPv4", 00:18:02.631 "traddr": "10.0.0.1", 00:18:02.631 "trsvcid": "56850" 00:18:02.631 }, 00:18:02.631 "auth": { 00:18:02.631 "state": "completed", 00:18:02.631 "digest": "sha512", 00:18:02.631 "dhgroup": "ffdhe4096" 00:18:02.631 } 00:18:02.631 } 00:18:02.631 ]' 00:18:02.631 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.631 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.631 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.889 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:02.889 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.889 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.889 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.889 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.889 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YTMxZWI2MmFiYTZhMjUwNWIzZDFjN2QxYjRiY2EzODE1ZmI2NTM2MTVkMDlmM2IzZGMwZWQ1ZTUyNDA2MzEwZqZhPb0=: 00:18:03.456 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.456 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:03.456 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.456 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.456 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.456 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:03.456 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.456 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:03.456 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:03.715 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:18:03.715 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.715 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:03.715 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:03.715 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:03.715 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.715 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.715 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.715 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.715 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.715 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.715 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.973 00:18:03.973 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.973 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.973 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.232 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.232 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.232 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.232 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.232 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.232 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.232 { 00:18:04.232 "cntlid": 129, 00:18:04.232 "qid": 0, 00:18:04.232 "state": "enabled", 00:18:04.232 "thread": "nvmf_tgt_poll_group_000", 00:18:04.232 "listen_address": { 00:18:04.232 "trtype": "TCP", 00:18:04.232 "adrfam": "IPv4", 00:18:04.232 "traddr": "10.0.0.2", 00:18:04.232 "trsvcid": "4420" 00:18:04.232 }, 00:18:04.232 "peer_address": { 00:18:04.232 "trtype": "TCP", 00:18:04.232 "adrfam": "IPv4", 00:18:04.232 "traddr": "10.0.0.1", 00:18:04.232 "trsvcid": "56884" 00:18:04.232 }, 00:18:04.232 "auth": { 00:18:04.232 "state": "completed", 00:18:04.232 "digest": "sha512", 00:18:04.232 "dhgroup": "ffdhe6144" 00:18:04.232 } 00:18:04.232 } 00:18:04.232 ]' 00:18:04.232 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.232 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.232 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.491 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:04.491 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.491 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.491 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.491 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.491 
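Stripped of the xtrace noise, the SPDK-to-SPDK leg just completed for ffdhe6144/key0 is three RPCs. The sketch below introduces shell variables for the long NQNs and assumes key0/ckey0 are key names registered earlier in the test; the flags mirror the logged commands:

    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
    # host side: restrict the initiator to the digest/dhgroup under test
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    # target side: admit the host on the subsystem with the key pair under test
    scripts/rpc.py nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host side: attach; DH-HMAC-CHAP runs as part of the fabric CONNECT
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0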
19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:M2JhZWU2YzMwNjNhNWNiMzk1MTZmYWUyNTJiNzE2ODk3ZTRhNDk2N2Y3NzZiMDgzJkG/Xw==: --dhchap-ctrl-secret DHHC-1:03:MGU4MmYyZTc0ZGM3MmE4NTIzYjBiZDQyNmYwYjhjNGUyODdmMzQ3MWEzYzAxZTlmMzAzODA1MDY1Mzk1ZjhmY/3k1VU=: 00:18:05.059 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.059 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:05.059 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.059 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.059 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.059 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.059 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:05.059 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:05.317 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:18:05.317 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.318 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:05.318 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:05.318 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:05.318 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.318 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.318 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.318 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.318 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.318 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.318 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.576 00:18:05.577 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.577 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.577 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.836 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.836 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.836 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.836 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.836 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.836 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.836 { 00:18:05.836 "cntlid": 131, 00:18:05.836 "qid": 0, 00:18:05.836 "state": "enabled", 00:18:05.836 "thread": "nvmf_tgt_poll_group_000", 00:18:05.836 "listen_address": { 00:18:05.836 "trtype": "TCP", 00:18:05.836 "adrfam": "IPv4", 00:18:05.836 "traddr": "10.0.0.2", 00:18:05.836 "trsvcid": "4420" 00:18:05.836 }, 00:18:05.836 "peer_address": { 00:18:05.836 "trtype": "TCP", 00:18:05.836 "adrfam": "IPv4", 00:18:05.836 "traddr": "10.0.0.1", 00:18:05.836 "trsvcid": "56906" 00:18:05.836 }, 00:18:05.836 "auth": { 00:18:05.836 "state": "completed", 00:18:05.836 "digest": "sha512", 00:18:05.836 "dhgroup": "ffdhe6144" 00:18:05.836 } 00:18:05.836 } 00:18:05.836 ]' 00:18:05.836 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.836 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.836 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.836 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:05.836 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.836 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.095 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.095 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.095 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret 
DHHC-1:01:ZDQzZTViY2E5ZGMwYTk0ZTIyMDgzNTAzYTNhMjU5MTUqssYm: --dhchap-ctrl-secret DHHC-1:02:NTJlNmVhODYwYWRjNjMwZWIzM2YwYTgxNjRhODc0MWY4ZGViMWYyM2U4MDI4M2Q10rtSAw==: 00:18:06.663 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.663 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:06.663 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.663 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.663 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.663 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.663 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:06.663 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:06.922 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:18:06.922 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:06.922 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:06.922 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:06.922 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:06.922 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.922 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.922 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.922 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.922 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.922 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.922 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.181 
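The nvme connect / nvme disconnect pairs interleaved above exercise the same keys through the kernel initiator. There the secrets are passed in the printable DHHC-1 representation seen in the log, where the two-digit field after the prefix appears to identify the hash the secret was transformed with (00 for a plain secret; the 01/02/03 values seen here would correspond to SHA-256/-384/-512). A sketch of that leg, reusing $subnqn/$hostnqn from above and with placeholder <base64-key> strings standing in for the full secrets shown in the log:

    # kernel initiator leg; hostid and addresses as used in this run
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 006f0d1b-21c0-e711-906e-00163566263e \
        --dhchap-secret 'DHHC-1:02:<base64-key>:' \
        --dhchap-ctrl-secret 'DHHC-1:01:<base64-key>:'
    nvme disconnect -n "$subnqn"
    # target side: drop the host entry before the next key/dhgroup round
    scripts/rpc.py nvmf_subsystem_remove_host "$subnqn" "$hostnqn"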
00:18:07.181 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.181 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.181 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.439 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.439 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.439 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.439 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.439 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.439 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.439 { 00:18:07.439 "cntlid": 133, 00:18:07.439 "qid": 0, 00:18:07.439 "state": "enabled", 00:18:07.439 "thread": "nvmf_tgt_poll_group_000", 00:18:07.439 "listen_address": { 00:18:07.439 "trtype": "TCP", 00:18:07.439 "adrfam": "IPv4", 00:18:07.439 "traddr": "10.0.0.2", 00:18:07.439 "trsvcid": "4420" 00:18:07.439 }, 00:18:07.439 "peer_address": { 00:18:07.439 "trtype": "TCP", 00:18:07.439 "adrfam": "IPv4", 00:18:07.439 "traddr": "10.0.0.1", 00:18:07.439 "trsvcid": "56938" 00:18:07.439 }, 00:18:07.439 "auth": { 00:18:07.439 "state": "completed", 00:18:07.439 "digest": "sha512", 00:18:07.439 "dhgroup": "ffdhe6144" 00:18:07.439 } 00:18:07.439 } 00:18:07.439 ]' 00:18:07.439 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.439 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:07.439 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.439 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:07.439 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.439 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.439 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.439 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.697 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmE4ZTU1OWU3MDZjNWUyZmE0MTZkYzg2MWZmZjZkYjFjYjNlZjcxZTk1OWJmYTAwoLxyXg==: --dhchap-ctrl-secret DHHC-1:01:MGE1MjlhZTJhNTA2ZTAzYmQxZjc2NjlhZTU3NGE5MDgFpX2a: 00:18:08.265 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.265 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:18:08.265 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:08.265 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.265 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.265 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.265 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.265 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:08.265 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:08.565 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:18:08.565 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.565 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:08.565 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:08.565 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:08.565 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.565 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:08.565 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.565 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.565 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.565 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:08.565 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:08.856 00:18:08.856 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.856 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.856 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:08.856 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.856 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.856 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.856 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.856 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.115 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.115 { 00:18:09.115 "cntlid": 135, 00:18:09.115 "qid": 0, 00:18:09.115 "state": "enabled", 00:18:09.115 "thread": "nvmf_tgt_poll_group_000", 00:18:09.115 "listen_address": { 00:18:09.115 "trtype": "TCP", 00:18:09.115 "adrfam": "IPv4", 00:18:09.115 "traddr": "10.0.0.2", 00:18:09.115 "trsvcid": "4420" 00:18:09.115 }, 00:18:09.115 "peer_address": { 00:18:09.115 "trtype": "TCP", 00:18:09.115 "adrfam": "IPv4", 00:18:09.115 "traddr": "10.0.0.1", 00:18:09.115 "trsvcid": "56962" 00:18:09.115 }, 00:18:09.115 "auth": { 00:18:09.115 "state": "completed", 00:18:09.115 "digest": "sha512", 00:18:09.115 "dhgroup": "ffdhe6144" 00:18:09.115 } 00:18:09.115 } 00:18:09.115 ]' 00:18:09.115 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:09.115 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.115 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.115 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:09.115 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:09.115 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.115 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.115 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.374 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YTMxZWI2MmFiYTZhMjUwNWIzZDFjN2QxYjRiY2EzODE1ZmI2NTM2MTVkMDlmM2IzZGMwZWQ1ZTUyNDA2MzEwZqZhPb0=: 00:18:09.942 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.942 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:09.942 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.943 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:09.943 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.943 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:09.943 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.943 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:09.943 19:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:09.943 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:18:09.943 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.943 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:09.943 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:09.943 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:09.943 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.943 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.943 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.943 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.943 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.943 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.943 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.511 00:18:10.511 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.511 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.511 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.770 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.770 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
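One detail of the key3 rounds above: the nvmf_subsystem_add_host and bdev_nvme_attach_controller calls carry no --dhchap-ctrlr-key, and the matching nvme connect passes no --dhchap-ctrl-secret, so key3 exercises unidirectional authentication (only the host is challenged, the controller is not). That falls out of the bash expansion visible in the trace at target/auth.sh@37: with ckeys[3] unset or empty, the ckey array expands to nothing, as in this sketch:

    # ${ckeys[$3]:+...} yields the option only when ckeys[$3] is set and
    # non-empty; for key3 the array is empty and the controller key is
    # simply omitted from both RPCs
    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$3" "${ckey[@]}"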
00:18:10.770 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.770 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.770 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.770 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.770 { 00:18:10.770 "cntlid": 137, 00:18:10.770 "qid": 0, 00:18:10.770 "state": "enabled", 00:18:10.770 "thread": "nvmf_tgt_poll_group_000", 00:18:10.770 "listen_address": { 00:18:10.770 "trtype": "TCP", 00:18:10.770 "adrfam": "IPv4", 00:18:10.770 "traddr": "10.0.0.2", 00:18:10.770 "trsvcid": "4420" 00:18:10.770 }, 00:18:10.770 "peer_address": { 00:18:10.770 "trtype": "TCP", 00:18:10.770 "adrfam": "IPv4", 00:18:10.770 "traddr": "10.0.0.1", 00:18:10.770 "trsvcid": "43642" 00:18:10.770 }, 00:18:10.770 "auth": { 00:18:10.770 "state": "completed", 00:18:10.770 "digest": "sha512", 00:18:10.770 "dhgroup": "ffdhe8192" 00:18:10.770 } 00:18:10.770 } 00:18:10.770 ]' 00:18:10.770 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.770 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.770 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.770 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:10.770 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.770 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.770 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.770 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.029 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:M2JhZWU2YzMwNjNhNWNiMzk1MTZmYWUyNTJiNzE2ODk3ZTRhNDk2N2Y3NzZiMDgzJkG/Xw==: --dhchap-ctrl-secret DHHC-1:03:MGU4MmYyZTc0ZGM3MmE4NTIzYjBiZDQyNmYwYjhjNGUyODdmMzQ3MWEzYzAxZTlmMzAzODA1MDY1Mzk1ZjhmY/3k1VU=: 00:18:11.598 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.598 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:11.598 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.598 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.598 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.598 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.598 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:11.598 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:11.598 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:18:11.598 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.598 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:11.598 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:11.598 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:11.598 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.598 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.598 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.598 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.598 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.598 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.598 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.167 00:18:12.167 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.167 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.167 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.426 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.426 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.426 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.426 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.426 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.426 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.426 { 00:18:12.426 "cntlid": 139, 00:18:12.426 "qid": 0, 00:18:12.426 "state": "enabled", 00:18:12.426 "thread": "nvmf_tgt_poll_group_000", 00:18:12.426 "listen_address": { 00:18:12.426 "trtype": "TCP", 00:18:12.426 "adrfam": "IPv4", 00:18:12.426 "traddr": "10.0.0.2", 00:18:12.426 "trsvcid": "4420" 00:18:12.426 }, 00:18:12.426 "peer_address": { 00:18:12.426 "trtype": "TCP", 00:18:12.426 "adrfam": "IPv4", 00:18:12.426 "traddr": "10.0.0.1", 00:18:12.426 "trsvcid": "43672" 00:18:12.426 }, 00:18:12.426 "auth": { 00:18:12.426 "state": "completed", 00:18:12.426 "digest": "sha512", 00:18:12.426 "dhgroup": "ffdhe8192" 00:18:12.426 } 00:18:12.426 } 00:18:12.426 ]' 00:18:12.426 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.426 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:12.426 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.426 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:12.426 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.426 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.426 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.426 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.685 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZDQzZTViY2E5ZGMwYTk0ZTIyMDgzNTAzYTNhMjU5MTUqssYm: --dhchap-ctrl-secret DHHC-1:02:NTJlNmVhODYwYWRjNjMwZWIzM2YwYTgxNjRhODc0MWY4ZGViMWYyM2U4MDI4M2Q10rtSAw==: 00:18:13.253 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.253 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:13.253 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.253 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.253 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.253 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.254 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:13.254 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:13.254 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:18:13.254 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.254 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:13.254 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:13.254 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:13.254 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.254 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.254 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.254 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.254 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.254 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.254 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.823 00:18:13.823 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:13.823 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.823 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.082 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.082 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.082 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.082 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.082 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.082 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.082 { 00:18:14.082 "cntlid": 141, 00:18:14.082 "qid": 0, 00:18:14.082 "state": "enabled", 00:18:14.082 "thread": "nvmf_tgt_poll_group_000", 00:18:14.082 "listen_address": 
{ 00:18:14.082 "trtype": "TCP", 00:18:14.082 "adrfam": "IPv4", 00:18:14.082 "traddr": "10.0.0.2", 00:18:14.082 "trsvcid": "4420" 00:18:14.082 }, 00:18:14.082 "peer_address": { 00:18:14.082 "trtype": "TCP", 00:18:14.082 "adrfam": "IPv4", 00:18:14.082 "traddr": "10.0.0.1", 00:18:14.082 "trsvcid": "43680" 00:18:14.082 }, 00:18:14.082 "auth": { 00:18:14.082 "state": "completed", 00:18:14.082 "digest": "sha512", 00:18:14.082 "dhgroup": "ffdhe8192" 00:18:14.082 } 00:18:14.082 } 00:18:14.082 ]' 00:18:14.082 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.082 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:14.082 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.082 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:14.082 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:14.082 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.082 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.083 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.342 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:MmE4ZTU1OWU3MDZjNWUyZmE0MTZkYzg2MWZmZjZkYjFjYjNlZjcxZTk1OWJmYTAwoLxyXg==: --dhchap-ctrl-secret DHHC-1:01:MGE1MjlhZTJhNTA2ZTAzYmQxZjc2NjlhZTU3NGE5MDgFpX2a: 00:18:14.910 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.910 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:14.910 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.910 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.910 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.910 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:14.910 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:14.910 19:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:15.169 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:18:15.169 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.169 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:15.169 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:15.169 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:15.169 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.169 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:15.169 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.169 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.169 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.169 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:15.169 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:15.428 00:18:15.428 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.428 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.428 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.688 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.688 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.688 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.688 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.688 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.688 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.688 { 00:18:15.688 "cntlid": 143, 00:18:15.688 "qid": 0, 00:18:15.688 "state": "enabled", 00:18:15.688 "thread": "nvmf_tgt_poll_group_000", 00:18:15.688 "listen_address": { 00:18:15.688 "trtype": "TCP", 00:18:15.688 "adrfam": "IPv4", 00:18:15.688 "traddr": "10.0.0.2", 00:18:15.688 "trsvcid": "4420" 00:18:15.688 }, 00:18:15.688 "peer_address": { 00:18:15.688 "trtype": "TCP", 00:18:15.688 "adrfam": "IPv4", 00:18:15.688 "traddr": "10.0.0.1", 00:18:15.688 "trsvcid": "43696" 00:18:15.688 }, 00:18:15.688 "auth": { 00:18:15.688 "state": "completed", 00:18:15.688 "digest": "sha512", 00:18:15.688 "dhgroup": 
"ffdhe8192" 00:18:15.688 } 00:18:15.688 } 00:18:15.688 ]' 00:18:15.688 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.688 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.688 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.688 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:15.688 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.946 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.946 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.946 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.946 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YTMxZWI2MmFiYTZhMjUwNWIzZDFjN2QxYjRiY2EzODE1ZmI2NTM2MTVkMDlmM2IzZGMwZWQ1ZTUyNDA2MzEwZqZhPb0=: 00:18:16.513 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.513 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:16.513 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.513 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.513 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.513 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:16.513 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:18:16.513 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:16.513 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:16.513 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:16.513 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:16.771 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:18:16.771 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:16.771 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:16.771 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:16.771 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:16.771 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.771 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.771 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.771 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.771 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.771 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.771 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.338 00:18:17.338 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.338 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.338 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.338 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.338 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.338 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.338 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.338 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.338 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.338 { 00:18:17.338 "cntlid": 145, 00:18:17.338 "qid": 0, 00:18:17.338 "state": "enabled", 00:18:17.338 "thread": "nvmf_tgt_poll_group_000", 00:18:17.338 "listen_address": { 00:18:17.338 "trtype": "TCP", 00:18:17.338 "adrfam": "IPv4", 00:18:17.338 "traddr": "10.0.0.2", 00:18:17.338 "trsvcid": "4420" 00:18:17.338 }, 00:18:17.338 "peer_address": { 00:18:17.338 "trtype": "TCP", 00:18:17.338 "adrfam": "IPv4", 00:18:17.338 "traddr": "10.0.0.1", 00:18:17.338 "trsvcid": "43730" 00:18:17.338 }, 00:18:17.338 "auth": { 00:18:17.338 
"state": "completed", 00:18:17.338 "digest": "sha512", 00:18:17.338 "dhgroup": "ffdhe8192" 00:18:17.338 } 00:18:17.338 } 00:18:17.338 ]' 00:18:17.338 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.597 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.597 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.597 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:17.597 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.597 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.597 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.597 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.856 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:M2JhZWU2YzMwNjNhNWNiMzk1MTZmYWUyNTJiNzE2ODk3ZTRhNDk2N2Y3NzZiMDgzJkG/Xw==: --dhchap-ctrl-secret DHHC-1:03:MGU4MmYyZTc0ZGM3MmE4NTIzYjBiZDQyNmYwYjhjNGUyODdmMzQ3MWEzYzAxZTlmMzAzODA1MDY1Mzk1ZjhmY/3k1VU=: 00:18:18.424 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.424 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:18.424 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.424 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.424 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.424 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:18:18.424 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.424 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.424 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.424 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:18.425 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:18.425 19:19:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:18.425 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:18.425 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:18.425 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:18.425 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:18.425 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:18.425 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:18.684 request: 00:18:18.684 { 00:18:18.684 "name": "nvme0", 00:18:18.684 "trtype": "tcp", 00:18:18.684 "traddr": "10.0.0.2", 00:18:18.684 "adrfam": "ipv4", 00:18:18.684 "trsvcid": "4420", 00:18:18.684 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:18.684 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:18:18.684 "prchk_reftag": false, 00:18:18.684 "prchk_guard": false, 00:18:18.684 "hdgst": false, 00:18:18.684 "ddgst": false, 00:18:18.684 "dhchap_key": "key2", 00:18:18.684 "method": "bdev_nvme_attach_controller", 00:18:18.684 "req_id": 1 00:18:18.684 } 00:18:18.684 Got JSON-RPC error response 00:18:18.684 response: 00:18:18.684 { 00:18:18.684 "code": -5, 00:18:18.684 "message": "Input/output error" 00:18:18.684 } 00:18:18.684 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:18.684 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:18.684 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:18.684 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:18.684 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:18.684 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.684 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.684 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.684 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.684 
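Stripped of the xtrace prefixes, the failed attach above is the test's first negative check: the subsystem's host entry only carries key1 at this point, so offering key2 has to fail DH-HMAC-CHAP negotiation and surface as the JSON-RPC -5 "Input/output error" seen in the response. A minimal sketch of that assertion, assuming rpc.py exits nonzero on the error; the socket path, addresses and NQNs are the ones in the trace, everything else is illustrative:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Attach with a key the target never registered for this host; success
    # here would mean authentication is not actually being enforced.
    if "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key2; then
      echo "ERROR: attach with an unregistered key unexpectedly succeeded" >&2
      exit 1
    fi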
19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.684 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.685 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.685 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:18.685 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:18.685 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:18.685 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:18.685 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:18.685 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:18.685 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:18.685 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:18.685 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:19.253 request: 00:18:19.253 { 00:18:19.253 "name": "nvme0", 00:18:19.253 "trtype": "tcp", 00:18:19.253 "traddr": "10.0.0.2", 00:18:19.253 "adrfam": "ipv4", 00:18:19.254 "trsvcid": "4420", 00:18:19.254 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:19.254 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:18:19.254 "prchk_reftag": false, 00:18:19.254 "prchk_guard": false, 00:18:19.254 "hdgst": false, 00:18:19.254 "ddgst": false, 00:18:19.254 "dhchap_key": "key1", 00:18:19.254 "dhchap_ctrlr_key": "ckey2", 00:18:19.254 "method": "bdev_nvme_attach_controller", 00:18:19.254 "req_id": 1 00:18:19.254 } 00:18:19.254 Got JSON-RPC error response 00:18:19.254 response: 00:18:19.254 { 00:18:19.254 "code": -5, 00:18:19.254 "message": "Input/output error" 00:18:19.254 } 00:18:19.254 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:19.254 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:19.254 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:19.254 19:19:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:19.254 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:19.254 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.254 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.254 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.254 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:18:19.254 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.254 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.254 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.254 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.254 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:19.254 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.254 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:19.254 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:19.254 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:19.254 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:19.254 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.254 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.513 request: 00:18:19.513 { 00:18:19.513 "name": "nvme0", 00:18:19.513 "trtype": "tcp", 00:18:19.513 "traddr": "10.0.0.2", 00:18:19.513 "adrfam": "ipv4", 00:18:19.513 "trsvcid": "4420", 00:18:19.513 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:19.513 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:18:19.513 "prchk_reftag": false, 00:18:19.513 "prchk_guard": false, 00:18:19.513 "hdgst": false, 00:18:19.513 "ddgst": false, 00:18:19.513 "dhchap_key": "key1", 00:18:19.513 "dhchap_ctrlr_key": "ckey1", 00:18:19.513 "method": "bdev_nvme_attach_controller", 00:18:19.513 "req_id": 1 00:18:19.513 } 00:18:19.513 Got JSON-RPC error response 00:18:19.513 response: 00:18:19.513 { 00:18:19.513 "code": -5, 00:18:19.513 "message": "Input/output error" 00:18:19.513 } 00:18:19.513 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:19.513 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:19.513 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:19.513 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:19.513 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:19.513 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.513 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.513 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.513 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1518192 00:18:19.513 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1518192 ']' 00:18:19.513 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1518192 00:18:19.513 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:19.513 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:19.513 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1518192 00:18:19.775 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:19.775 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:19.775 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1518192' 00:18:19.775 killing process with pid 1518192 00:18:19.775 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1518192 00:18:19.775 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1518192 00:18:19.775 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:19.775 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:19.775 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:19.775 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.775 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # 
nvmfpid=1539252 00:18:19.775 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1539252 00:18:19.775 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:19.775 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1539252 ']' 00:18:19.775 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.775 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:19.775 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.775 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:19.775 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.712 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:20.712 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:20.712 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:20.712 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:20.712 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.712 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.712 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:20.712 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1539252 00:18:20.712 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1539252 ']' 00:18:20.712 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.712 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:20.712 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
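The restarted target sits in --wait-for-rpc until waitforlisten sees /var/tmp/spdk.sock. One iteration of the connect_authenticate helper that the surrounding trace keeps repeating reduces to roughly the sketch below; key3, the NQNs, the digest and the dhgroup are taken from the trace, while the target-side RPCs are shown against the default socket for brevity (the test actually routes them into the cvl_0_0_ns_spdk network namespace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Target side: allow this host to authenticate with key3.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3

    # Host side: pin one digest/dhgroup pair, then attach with the same key.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key3

    # Confirm the qpair finished DH-HMAC-CHAP with the expected parameters,
    # which is exactly what the jq probes in the trace are checking.
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]

    # Detach before the next digest/dhgroup combination is tried.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0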
00:18:20.712 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:20.712 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.971 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:20.971 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:20.971 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:18:20.971 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.971 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.972 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.972 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:18:20.972 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.972 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:20.972 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:20.972 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:20.972 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.972 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:20.972 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.972 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.972 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.972 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:20.972 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:21.539 00:18:21.539 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.539 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.539 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.799 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.799 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.799 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.799 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.799 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.799 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.799 { 00:18:21.799 "cntlid": 1, 00:18:21.799 "qid": 0, 00:18:21.799 "state": "enabled", 00:18:21.799 "thread": "nvmf_tgt_poll_group_000", 00:18:21.799 "listen_address": { 00:18:21.799 "trtype": "TCP", 00:18:21.799 "adrfam": "IPv4", 00:18:21.799 "traddr": "10.0.0.2", 00:18:21.799 "trsvcid": "4420" 00:18:21.799 }, 00:18:21.799 "peer_address": { 00:18:21.799 "trtype": "TCP", 00:18:21.799 "adrfam": "IPv4", 00:18:21.799 "traddr": "10.0.0.1", 00:18:21.799 "trsvcid": "47306" 00:18:21.799 }, 00:18:21.799 "auth": { 00:18:21.799 "state": "completed", 00:18:21.799 "digest": "sha512", 00:18:21.799 "dhgroup": "ffdhe8192" 00:18:21.799 } 00:18:21.799 } 00:18:21.799 ]' 00:18:21.799 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.799 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:21.799 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.799 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:21.799 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.799 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.799 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.799 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.123 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YTMxZWI2MmFiYTZhMjUwNWIzZDFjN2QxYjRiY2EzODE1ZmI2NTM2MTVkMDlmM2IzZGMwZWQ1ZTUyNDA2MzEwZqZhPb0=: 00:18:22.691 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.691 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:22.691 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.691 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.691 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.692 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:22.692 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.692 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.692 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.692 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:22.692 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:22.692 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:22.692 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:22.692 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:22.692 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:22.692 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:22.692 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:22.692 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:22.692 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:22.692 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:22.951 request: 00:18:22.951 { 00:18:22.951 "name": "nvme0", 00:18:22.951 "trtype": "tcp", 00:18:22.951 "traddr": "10.0.0.2", 00:18:22.951 "adrfam": "ipv4", 00:18:22.951 "trsvcid": "4420", 00:18:22.951 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:22.951 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:18:22.951 "prchk_reftag": false, 00:18:22.951 "prchk_guard": false, 00:18:22.951 "hdgst": false, 00:18:22.951 "ddgst": false, 00:18:22.951 "dhchap_key": "key3", 00:18:22.951 "method": "bdev_nvme_attach_controller", 00:18:22.951 "req_id": 1 00:18:22.951 } 00:18:22.951 Got JSON-RPC error response 00:18:22.951 response: 00:18:22.951 { 00:18:22.951 "code": -5, 00:18:22.951 "message": "Input/output error" 00:18:22.951 } 00:18:22.951 19:19:09 
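The -5 response above is deliberate: auth.sh@157 narrows the host to sha256 digests only, so the attach at @158, which previously succeeded with key3, is now required to fail. A sketch of that pattern, reusing the same illustrative variables as the earlier sketch:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Restrict the host to a digest the current target setup will not
    # complete, then require the attach to fail.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
    if "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key3; then
      echo "ERROR: attach succeeded despite the digest restriction" >&2
      exit 1
    fi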
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:22.951 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:22.951 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:22.951 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:22.951 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:18:22.951 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:18:22.951 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:22.951 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:23.211 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.211 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:23.211 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.211 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:23.211 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:23.211 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:23.211 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:23.211 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.211 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.211 request: 00:18:23.211 { 00:18:23.211 "name": "nvme0", 00:18:23.211 "trtype": "tcp", 00:18:23.211 "traddr": "10.0.0.2", 00:18:23.211 "adrfam": "ipv4", 00:18:23.211 "trsvcid": "4420", 00:18:23.212 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:23.212 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:18:23.212 "prchk_reftag": false, 00:18:23.212 "prchk_guard": false, 00:18:23.212 "hdgst": false, 00:18:23.212 "ddgst": false, 00:18:23.212 "dhchap_key": "key3", 00:18:23.212 
"method": "bdev_nvme_attach_controller", 00:18:23.212 "req_id": 1 00:18:23.212 } 00:18:23.212 Got JSON-RPC error response 00:18:23.212 response: 00:18:23.212 { 00:18:23.212 "code": -5, 00:18:23.212 "message": "Input/output error" 00:18:23.212 } 00:18:23.212 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:23.212 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:23.212 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:23.212 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:23.212 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:23.212 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:18:23.212 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:23.212 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:23.212 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:23.212 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:23.471 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:23.471 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.471 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.471 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.471 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:23.471 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.471 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.471 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.471 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:23.471 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:23.471 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:23.471 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:23.471 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:23.471 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:23.471 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:23.471 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:23.471 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:23.730 request: 00:18:23.730 { 00:18:23.730 "name": "nvme0", 00:18:23.730 "trtype": "tcp", 00:18:23.730 "traddr": "10.0.0.2", 00:18:23.730 "adrfam": "ipv4", 00:18:23.730 "trsvcid": "4420", 00:18:23.730 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:23.730 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:18:23.730 "prchk_reftag": false, 00:18:23.730 "prchk_guard": false, 00:18:23.730 "hdgst": false, 00:18:23.730 "ddgst": false, 00:18:23.730 "dhchap_key": "key0", 00:18:23.730 "dhchap_ctrlr_key": "key1", 00:18:23.730 "method": "bdev_nvme_attach_controller", 00:18:23.730 "req_id": 1 00:18:23.730 } 00:18:23.730 Got JSON-RPC error response 00:18:23.730 response: 00:18:23.730 { 00:18:23.730 "code": -5, 00:18:23.730 "message": "Input/output error" 00:18:23.730 } 00:18:23.730 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:23.730 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:23.730 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:23.730 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:23.730 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:23.731 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:23.990 00:18:23.990 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:18:23.990 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 
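After the bidirectional attempt (key0 plus controller key key1) is rejected with the -5 error above, auth.sh@192 falls back to a plain unidirectional attach with key0, and the trace then only needs to confirm a controller named nvme0 exists before detaching. Roughly, under the same illustrative assumptions as the sketches above:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
    subnqn=nqn.2024-03.io.spdk:cnode0

    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key0
    # The get_controllers/jq pair in the trace boils down to this check.
    [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers \
          | jq -r '.[].name') == nvme0 ]]
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0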
00:18:23.990 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.990 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.990 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.990 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.250 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:18:24.250 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:18:24.250 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1518364 00:18:24.250 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1518364 ']' 00:18:24.250 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1518364 00:18:24.250 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:24.250 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:24.250 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1518364 00:18:24.250 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:24.250 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:24.250 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1518364' 00:18:24.250 killing process with pid 1518364 00:18:24.250 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1518364 00:18:24.250 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1518364 00:18:24.819 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:24.819 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:24.819 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:18:24.819 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:24.819 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:18:24.819 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:24.819 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:24.819 rmmod nvme_tcp 00:18:24.819 rmmod nvme_fabrics 00:18:24.819 rmmod nvme_keyring 00:18:24.819 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:24.819 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:18:24.819 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:18:24.819 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- 
# '[' -n 1539252 ']' 00:18:24.819 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1539252 00:18:24.819 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1539252 ']' 00:18:24.819 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1539252 00:18:24.819 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:24.819 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:24.819 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1539252 00:18:24.819 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:24.819 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:24.819 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1539252' 00:18:24.819 killing process with pid 1539252 00:18:24.819 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1539252 00:18:24.819 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1539252 00:18:25.079 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:25.079 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:25.079 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:25.079 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:25.079 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:25.079 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.079 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:25.079 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.987 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:26.987 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.aPX /tmp/spdk.key-sha256.W7c /tmp/spdk.key-sha384.ZdN /tmp/spdk.key-sha512.9sT /tmp/spdk.key-sha512.1VY /tmp/spdk.key-sha384.wQ4 /tmp/spdk.key-sha256.bhh '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:26.987 00:18:26.987 real 2m9.961s 00:18:26.987 user 4m49.256s 00:18:26.987 sys 0m28.988s 00:18:26.987 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:26.987 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.987 ************************************ 00:18:26.987 END TEST nvmf_auth_target 00:18:26.987 ************************************ 00:18:26.987 19:19:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:26.987 19:19:13 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:26.987 19:19:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:26.987 19:19:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:26.987 19:19:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:27.247 ************************************ 00:18:27.247 START TEST nvmf_bdevio_no_huge 00:18:27.247 ************************************ 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:27.247 * Looking for test storage... 00:18:27.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:27.247 19:19:13 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:27.247 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:27.248 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:27.248 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:18:27.248 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:35.374 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:35.374 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:18:35.374 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:35.374 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:35.374 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:35.374 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:35.374 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:35.374 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:18:35.374 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:35.374 19:19:20 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:18:35.374 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:18:35.374 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:18:35.374 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:18:35.374 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:18:35.374 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:18:35.374 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:35.374 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:35.374 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:35.374 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:35.374 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:35.374 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:35.374 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:35.375 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:35.375 19:19:20 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:35.375 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:35.375 Found net devices under 0000:af:00.0: cvl_0_0 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
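The loop traced above resolves each supported PCI ID to its kernel net device through sysfs. A minimal standalone sketch of the same walk (device IDs 0x159b/0x1592 are the E810 parts matched in this run; the sysfs paths, the vendor/device values, and the operstate check all mirror the trace):

    for pci in /sys/bus/pci/devices/*; do
        vendor=$(< "$pci/vendor") device=$(< "$pci/device")
        [[ $vendor == 0x8086 && ( $device == 0x159b || $device == 0x1592 ) ]] || continue
        for net_dev in "$pci/net/"*; do
            [[ -e $net_dev ]] || continue
            # keep only interfaces reporting operstate "up", as the [[ up == up ]] test does
            [[ $(< "$net_dev/operstate") == up ]] &&
                echo "Found net devices under ${pci##*/}: ${net_dev##*/}"
        done
    done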
00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:35.375 Found net devices under 0000:af:00.1: cvl_0_1 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:35.375 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:18:35.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:18:35.375 00:18:35.375 --- 10.0.0.2 ping statistics --- 00:18:35.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.375 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:35.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:35.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:18:35.375 00:18:35.375 --- 10.0.0.1 ping statistics --- 00:18:35.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.375 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1543941 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1543941 00:18:35.375 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 1543941 ']' 00:18:35.376 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.376 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:35.376 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
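nvmfappstart above launches nvmf_tgt inside the target namespace and then parks in waitforlisten until the RPC socket answers. The helper body is not part of this trace; a hedged sketch of the polling pattern it implements (retry count and interval are illustrative; rpc_get_methods is used purely as a liveness probe):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for ((i = 0; i < 100; i++)); do
        if "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break    # target is up and serving RPCs
        fi
        sleep 0.1
    done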
00:18:35.376 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:35.376 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:35.376 [2024-07-24 19:19:20.526273] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:18:35.376 [2024-07-24 19:19:20.526324] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:35.376 [2024-07-24 19:19:20.606440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:35.376 [2024-07-24 19:19:20.705402] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:35.376 [2024-07-24 19:19:20.705444] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:35.376 [2024-07-24 19:19:20.705454] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:35.376 [2024-07-24 19:19:20.705462] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:35.376 [2024-07-24 19:19:20.705469] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:35.376 [2024-07-24 19:19:20.705587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:35.376 [2024-07-24 19:19:20.705700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:35.376 [2024-07-24 19:19:20.705809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:35.376 [2024-07-24 19:19:20.705810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:35.376 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:35.376 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:18:35.376 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:35.376 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:35.376 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:35.376 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:35.376 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:35.376 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.376 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:35.376 [2024-07-24 19:19:21.382031] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:35.376 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.376 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:35.376 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.376 19:19:21 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:35.376 Malloc0 00:18:35.376 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.376 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:35.376 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.376 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:35.376 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.376 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:35.376 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.376 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:35.376 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.376 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:35.376 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.376 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:35.376 [2024-07-24 19:19:21.418669] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:35.376 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.376 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:35.376 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:35.376 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:18:35.376 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:18:35.376 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:35.376 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:35.376 { 00:18:35.376 "params": { 00:18:35.376 "name": "Nvme$subsystem", 00:18:35.376 "trtype": "$TEST_TRANSPORT", 00:18:35.376 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:35.376 "adrfam": "ipv4", 00:18:35.376 "trsvcid": "$NVMF_PORT", 00:18:35.376 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:35.376 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:35.376 "hdgst": ${hdgst:-false}, 00:18:35.376 "ddgst": ${ddgst:-false} 00:18:35.376 }, 00:18:35.376 "method": "bdev_nvme_attach_controller" 00:18:35.376 } 00:18:35.376 EOF 00:18:35.376 )") 00:18:35.376 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:18:35.376 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
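gen_nvmf_target_json above renders one bdev_nvme_attach_controller object per subsystem and hands the document to bdevio on /dev/fd/62; the expanded JSON appears in the next trace lines. Against an already-running app the same attachment could be made with the plain RPC, e.g. (flag spellings from rpc.py; invocation shown for illustration only):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1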
00:18:35.376 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:18:35.376 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:35.376 "params": { 00:18:35.376 "name": "Nvme1", 00:18:35.376 "trtype": "tcp", 00:18:35.376 "traddr": "10.0.0.2", 00:18:35.376 "adrfam": "ipv4", 00:18:35.376 "trsvcid": "4420", 00:18:35.376 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:35.376 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:35.376 "hdgst": false, 00:18:35.376 "ddgst": false 00:18:35.376 }, 00:18:35.376 "method": "bdev_nvme_attach_controller" 00:18:35.376 }' 00:18:35.376 [2024-07-24 19:19:21.470310] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:18:35.376 [2024-07-24 19:19:21.470360] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1543980 ] 00:18:35.376 [2024-07-24 19:19:21.545230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:35.635 [2024-07-24 19:19:21.646032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:35.635 [2024-07-24 19:19:21.646128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:35.635 [2024-07-24 19:19:21.646128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.635 I/O targets: 00:18:35.636 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:35.636 00:18:35.636 00:18:35.636 CUnit - A unit testing framework for C - Version 2.1-3 00:18:35.636 http://cunit.sourceforge.net/ 00:18:35.636 00:18:35.636 00:18:35.636 Suite: bdevio tests on: Nvme1n1 00:18:35.636 Test: blockdev write read block ...passed 00:18:35.895 Test: blockdev write zeroes read block ...passed 00:18:35.895 Test: blockdev write zeroes read no split ...passed 00:18:35.895 Test: blockdev write zeroes read split ...passed 00:18:35.895 Test: blockdev write zeroes read split partial ...passed 00:18:35.895 Test: blockdev reset ...[2024-07-24 19:19:22.024038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.895 [2024-07-24 19:19:22.024101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97a670 (9): Bad file descriptor 00:18:36.155 [2024-07-24 19:19:22.134702] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
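The comparev-and-writev cases further down drive NVMe fused COMPARE+WRITE pairs: each intentional miscompare completes as COMPARE FAILURE (02/85) and its fused write is then dropped as ABORTED - FAILED FUSED (00/09), so those notices are the expected pass pattern, not faults. From a captured run the pairs can be tallied with (log file name here is hypothetical):

    grep -c 'COMPARE FAILURE (02/85)' bdevio.log
    grep -c 'ABORTED - FAILED FUSED (00/09)' bdevio.log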
00:18:36.155 passed 00:18:36.155 Test: blockdev write read 8 blocks ...passed 00:18:36.155 Test: blockdev write read size > 128k ...passed 00:18:36.155 Test: blockdev write read invalid size ...passed 00:18:36.155 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:36.155 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:36.155 Test: blockdev write read max offset ...passed 00:18:36.155 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:36.155 Test: blockdev writev readv 8 blocks ...passed 00:18:36.155 Test: blockdev writev readv 30 x 1block ...passed 00:18:36.155 Test: blockdev writev readv block ...passed 00:18:36.155 Test: blockdev writev readv size > 128k ...passed 00:18:36.155 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:36.155 Test: blockdev comparev and writev ...[2024-07-24 19:19:22.386525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:36.155 [2024-07-24 19:19:22.386555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.155 [2024-07-24 19:19:22.386571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:36.155 [2024-07-24 19:19:22.386582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:36.155 [2024-07-24 19:19:22.386841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:36.155 [2024-07-24 19:19:22.386854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:36.155 [2024-07-24 19:19:22.386868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:36.155 [2024-07-24 19:19:22.386877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:36.155 [2024-07-24 19:19:22.387126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:36.155 [2024-07-24 19:19:22.387141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:36.155 [2024-07-24 19:19:22.387155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:36.155 [2024-07-24 19:19:22.387166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:36.155 [2024-07-24 19:19:22.387413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:36.155 [2024-07-24 19:19:22.387425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:36.155 [2024-07-24 19:19:22.387439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:36.155 [2024-07-24 19:19:22.387448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:36.414 passed 00:18:36.414 Test: blockdev nvme passthru rw ...passed 00:18:36.414 Test: blockdev nvme passthru vendor specific ...[2024-07-24 19:19:22.469015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:36.414 [2024-07-24 19:19:22.469032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:36.414 [2024-07-24 19:19:22.469168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:36.414 [2024-07-24 19:19:22.469180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:36.414 [2024-07-24 19:19:22.469301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:36.414 [2024-07-24 19:19:22.469313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:36.414 [2024-07-24 19:19:22.469430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:36.414 [2024-07-24 19:19:22.469442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:36.414 passed 00:18:36.414 Test: blockdev nvme admin passthru ...passed 00:18:36.414 Test: blockdev copy ...passed 00:18:36.414 00:18:36.414 Run Summary: Type Total Ran Passed Failed Inactive 00:18:36.414 suites 1 1 n/a 0 0 00:18:36.414 tests 23 23 23 0 0 00:18:36.414 asserts 152 152 152 0 n/a 00:18:36.414 00:18:36.414 Elapsed time = 1.442 seconds 00:18:36.672 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:36.672 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.672 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:36.672 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.672 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:36.673 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:36.673 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:36.673 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:18:36.673 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:36.673 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:18:36.673 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:36.673 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:36.673 rmmod nvme_tcp 00:18:36.673 rmmod nvme_fabrics 00:18:36.673 rmmod nvme_keyring 00:18:36.931 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:36.931 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:18:36.931 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:18:36.931 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1543941 ']' 00:18:36.931 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1543941 00:18:36.931 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 1543941 ']' 00:18:36.931 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 1543941 00:18:36.931 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:18:36.931 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:36.931 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1543941 00:18:36.931 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:18:36.931 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:18:36.931 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1543941' 00:18:36.931 killing process with pid 1543941 00:18:36.931 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 1543941 00:18:36.931 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 1543941 00:18:37.190 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:37.190 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:37.190 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:37.190 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:37.190 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:37.190 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.190 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:37.190 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.721 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:39.721 00:18:39.721 real 0m12.179s 00:18:39.721 user 0m14.420s 00:18:39.721 sys 0m6.556s 00:18:39.721 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:39.721 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:39.721 ************************************ 00:18:39.721 END TEST nvmf_bdevio_no_huge 00:18:39.721 ************************************ 00:18:39.721 19:19:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:39.721 19:19:25 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:39.721 19:19:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:39.721 19:19:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:39.721 ************************************ 00:18:39.721 START TEST nvmf_tls 00:18:39.721 ************************************ 00:18:39.721 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:39.721 * Looking for test storage... 00:18:39.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:39.721 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:39.721 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:39.721 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:39.721 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
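nvmftestinit sourced above rebuilds the same two-namespace TCP rig for this tls run as it did for bdevio; condensed from the traces, the sequence is:

    # target side lives in its own namespace, initiator stays in the root one
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2          # initiator -> target sanity check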
00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:18:39.722 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:46.316 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:46.316 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:46.316 Found net devices under 0000:af:00.0: cvl_0_0 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:46.316 Found net devices under 0000:af:00.1: cvl_0_1 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:46.316 19:19:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:46.316 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:46.316 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:46.316 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:46.316 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:46.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:46.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:18:46.316 00:18:46.316 --- 10.0.0.2 ping statistics --- 00:18:46.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.317 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:18:46.317 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:46.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:46.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:18:46.317 00:18:46.317 --- 10.0.0.1 ping statistics --- 00:18:46.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.317 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:18:46.317 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:46.317 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:18:46.317 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:46.317 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:46.317 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:46.317 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:46.317 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:46.317 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:46.317 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:46.317 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:46.317 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:46.317 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:46.317 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.317 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1547926 00:18:46.317 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:46.317 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1547926 00:18:46.317 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1547926 ']' 00:18:46.317 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.317 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:46.317 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:46.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.317 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:46.317 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.317 [2024-07-24 19:19:32.153541] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
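For anyone reproducing this topology by hand, the nvmf_tcp_init sequence traced above reduces to the standalone sketch below. The interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are specific to this run's E810 ports; substitute your own two test interfaces.

    # Sketch of the TCP test topology built above: one NIC port per side,
    # target port isolated in a network namespace, NVMe/TCP port 4420 opened.
    TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

    ip -4 addr flush $TARGET_IF
    ip -4 addr flush $INITIATOR_IF
    ip netns add $NS
    ip link set $TARGET_IF netns $NS                 # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev $INITIATOR_IF        # initiator side, default namespace
    ip netns exec $NS ip addr add 10.0.0.2/24 dev $TARGET_IF
    ip link set $INITIATOR_IF up
    ip netns exec $NS ip link set $TARGET_IF up
    ip netns exec $NS ip link set lo up
    iptables -I INPUT 1 -i $INITIATOR_IF -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                               # initiator -> target
    ip netns exec $NS ping -c 1 10.0.0.1             # target -> initiator
    modprobe nvme-tcp

Both one-packet pings must succeed before the target application is started in the namespace.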
00:18:46.317 [2024-07-24 19:19:32.153592] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:46.317 EAL: No free 2048 kB hugepages reported on node 1 00:18:46.317 [2024-07-24 19:19:32.227153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.317 [2024-07-24 19:19:32.298668] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:46.317 [2024-07-24 19:19:32.298708] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:46.317 [2024-07-24 19:19:32.298722] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:46.317 [2024-07-24 19:19:32.298730] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:46.317 [2024-07-24 19:19:32.298738] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:46.317 [2024-07-24 19:19:32.298775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.886 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:46.886 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:46.886 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:46.886 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:46.886 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.886 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:46.886 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:18:46.886 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:47.145 true 00:18:47.145 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:47.145 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:18:47.145 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:18:47.145 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:18:47.145 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:47.404 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:47.404 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:18:47.663 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:18:47.663 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:18:47.663 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 
7 00:18:47.663 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:47.663 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:18:47.922 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:18:47.922 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:18:47.922 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:47.922 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:18:47.922 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:18:47.922 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:18:47.922 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:48.181 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:48.181 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:18:48.440 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:18:48.440 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:18:48.440 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:48.440 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:48.440 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:18:48.698 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:18:48.698 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:18:48.698 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:48.698 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:48.698 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:48.698 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:48.698 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:18:48.698 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:48.698 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:48.698 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:48.698 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:48.698 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 
1 00:18:48.698 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:48.698 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:48.698 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:18:48.698 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:48.698 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:48.698 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:48.698 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:18:48.956 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.3SMnuh7Gi7 00:18:48.956 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:48.956 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.DXWvmcdUYX 00:18:48.956 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:48.956 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:48.956 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.3SMnuh7Gi7 00:18:48.956 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.DXWvmcdUYX 00:18:48.956 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:48.956 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:49.216 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.3SMnuh7Gi7 00:18:49.216 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.3SMnuh7Gi7 00:18:49.216 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:49.474 [2024-07-24 19:19:35.503325] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:49.474 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:49.474 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:49.733 [2024-07-24 19:19:35.836169] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:49.733 [2024-07-24 19:19:35.836352] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:49.733 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:49.992 malloc0 00:18:49.992 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:49.992 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3SMnuh7Gi7 00:18:50.252 [2024-07-24 19:19:36.345709] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:50.252 19:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.3SMnuh7Gi7 00:18:50.252 EAL: No free 2048 kB hugepages reported on node 1 00:19:00.236 Initializing NVMe Controllers 00:19:00.236 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:00.236 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:00.236 Initialization complete. Launching workers. 00:19:00.236 ======================================================== 00:19:00.236 Latency(us) 00:19:00.236 Device Information : IOPS MiB/s Average min max 00:19:00.236 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16378.38 63.98 3908.05 723.51 5520.38 00:19:00.236 ======================================================== 00:19:00.236 Total : 16378.38 63.98 3908.05 723.51 5520.38 00:19:00.236 00:19:00.495 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3SMnuh7Gi7 00:19:00.495 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:00.495 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:00.495 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:00.495 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3SMnuh7Gi7' 00:19:00.495 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:00.495 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1550352 00:19:00.495 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:00.495 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:00.495 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1550352 /var/tmp/bdevperf.sock 00:19:00.495 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1550352 ']' 00:19:00.495 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:00.495 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:00.495 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:00.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:00.495 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:00.495 19:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.495 [2024-07-24 19:19:46.527865] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:19:00.495 [2024-07-24 19:19:46.527919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1550352 ] 00:19:00.495 EAL: No free 2048 kB hugepages reported on node 1 00:19:00.495 [2024-07-24 19:19:46.593252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.495 [2024-07-24 19:19:46.660460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.432 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:01.432 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:01.432 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3SMnuh7Gi7 00:19:01.432 [2024-07-24 19:19:47.483591] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:01.432 [2024-07-24 19:19:47.483690] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:01.432 TLSTESTn1 00:19:01.432 19:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:01.432 Running I/O for 10 seconds... 
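While that I/O run executes, it is worth collecting the target-side configuration that was spread across the trace above. Gathered in one place, with rpc.py invoked relative to the spdk tree and the key file name taken from this run, the sequence is approximately:

    RPC=./scripts/rpc.py
    KEY=/tmp/tmp.3SMnuh7Gi7                     # interchange-format PSK file, chmod 0600 above

    $RPC sock_set_default_impl -i ssl           # use the TLS-capable ssl socket implementation
    $RPC sock_impl_set_options -i ssl --tls-version 13
    $RPC framework_start_init                   # target was launched with --wait-for-rpc
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"

The -k flag on the listener is what makes TLS mandatory on 10.0.0.2:4420, and add_host --psk binds the host NQN to the key the initiator must present.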
00:19:13.646 00:19:13.646 Latency(us) 00:19:13.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.646 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:13.646 Verification LBA range: start 0x0 length 0x2000 00:19:13.646 TLSTESTn1 : 10.02 5452.14 21.30 0.00 0.00 23434.23 6448.74 59139.69 00:19:13.646 =================================================================================================================== 00:19:13.646 Total : 5452.14 21.30 0.00 0.00 23434.23 6448.74 59139.69 00:19:13.646 0 00:19:13.646 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:13.646 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 1550352 00:19:13.646 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1550352 ']' 00:19:13.646 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1550352 00:19:13.646 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:13.646 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:13.646 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1550352 00:19:13.646 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:13.646 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:13.646 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1550352' 00:19:13.646 killing process with pid 1550352 00:19:13.646 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1550352 00:19:13.646 Received shutdown signal, test time was about 10.000000 seconds 00:19:13.646 00:19:13.646 Latency(us) 00:19:13.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.646 =================================================================================================================== 00:19:13.646 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:13.646 [2024-07-24 19:19:57.764936] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:13.646 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1550352 00:19:13.646 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DXWvmcdUYX 00:19:13.646 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:13.646 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DXWvmcdUYX 00:19:13.646 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:13.646 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:13.646 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:13.646 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
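Everything from target/tls.sh@146 onward is negative testing: each attach must fail for the suite to pass. The NOT wrapper from test/common/autotest_common.sh inverts the exit status (the es=1 and (( es > 128 )) lines traced below are its bookkeeping, distinguishing ordinary failures from signal deaths). A minimal stand-in for the same pattern outside the test framework:

    # Simplified expect-failure helper; the real NOT() additionally treats
    # exit codes above 128 (crashes) as genuine errors, not expected failures.
    NOT() { ! "$@"; }

    # Example: attaching with the wrong key (tmp.DXWvmcdUYX instead of the
    # registered tmp.3SMnuh7Gi7) must fail, so the NOT invocation succeeds.
    NOT ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.DXWvmcdUYX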
00:19:13.646 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DXWvmcdUYX 00:19:13.646 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:13.646 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:13.646 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:13.646 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.DXWvmcdUYX' 00:19:13.646 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:13.646 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1552231 00:19:13.646 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:13.646 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:13.646 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1552231 /var/tmp/bdevperf.sock 00:19:13.646 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1552231 ']' 00:19:13.646 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:13.646 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:13.647 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:13.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:13.647 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:13.647 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.647 [2024-07-24 19:19:57.996807] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:19:13.647 [2024-07-24 19:19:57.996860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1552231 ] 00:19:13.647 EAL: No free 2048 kB hugepages reported on node 1 00:19:13.647 [2024-07-24 19:19:58.064602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.647 [2024-07-24 19:19:58.132421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:13.647 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:13.647 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:13.647 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.DXWvmcdUYX 00:19:13.647 [2024-07-24 19:19:58.933947] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:13.647 [2024-07-24 19:19:58.934026] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:13.647 [2024-07-24 19:19:58.938629] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:13.647 [2024-07-24 19:19:58.939278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa875e0 (107): Transport endpoint is not connected 00:19:13.647 [2024-07-24 19:19:58.940269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa875e0 (9): Bad file descriptor 00:19:13.647 [2024-07-24 19:19:58.941271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:13.647 [2024-07-24 19:19:58.941282] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:13.647 [2024-07-24 19:19:58.941293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
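Both /tmp key files carry PSKs in the NVMe TLS interchange form produced by format_interchange_psk earlier in this log. The sketch below is reconstructed from the input/output pairs visible above, so treat it as illustrative rather than the canonical helper: the configured key bytes get a little-endian CRC32 appended, and the result is base64-encoded behind the NVMeTLSkey-1 prefix and a two-digit hash identifier.

    # Reconstructed sketch of format_interchange_psk (assumed form:
    # "NVMeTLSkey-1:<digest>:<base64(key || CRC32(key))>:").
    format_interchange_psk() {
        local key=$1 digest=$2
        # key bytes are used as given (not hex-decoded); CRC32 appended little-endian
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); c=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k+c).decode()))' "$key" "$digest"
    }

    format_interchange_psk 00112233445566778899aabbccddeeff 1
    # NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
    format_interchange_psk ffeeddccbbaa99887766554433221100 1
    # NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: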
00:19:13.647 request:
00:19:13.647 {
00:19:13.647 "name": "TLSTEST",
00:19:13.647 "trtype": "tcp",
00:19:13.647 "traddr": "10.0.0.2",
00:19:13.647 "adrfam": "ipv4",
00:19:13.647 "trsvcid": "4420",
00:19:13.647 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:19:13.647 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:19:13.647 "prchk_reftag": false,
00:19:13.647 "prchk_guard": false,
00:19:13.647 "hdgst": false,
00:19:13.647 "ddgst": false,
00:19:13.647 "psk": "/tmp/tmp.DXWvmcdUYX",
00:19:13.647 "method": "bdev_nvme_attach_controller",
00:19:13.647 "req_id": 1
00:19:13.647 }
00:19:13.647 Got JSON-RPC error response
00:19:13.647 response:
00:19:13.647 {
00:19:13.647 "code": -5,
00:19:13.647 "message": "Input/output error"
00:19:13.647 }
00:19:13.647 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1552231
00:19:13.647 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1552231 ']'
00:19:13.647 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1552231
00:19:13.647 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:19:13.647 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:13.647 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1552231
00:19:13.647 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:19:13.647 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:19:13.647 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1552231'
00:19:13.647 killing process with pid 1552231
00:19:13.647 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1552231
00:19:13.647 Received shutdown signal, test time was about 10.000000 seconds
00:19:13.647
00:19:13.647 Latency(us)
00:19:13.647 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:13.647 ===================================================================================================================
00:19:13.647 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:19:13.647 [2024-07-24 19:19:59.010603] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:19:13.647 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1552231
00:19:13.647 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1
00:19:13.647 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1
00:19:13.647 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:19:13.647 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:19:13.647 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:19:13.647 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3SMnuh7Gi7
00:19:13.647 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0
00:19:13.647 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3SMnuh7Gi7 00:19:13.647 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:13.647 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:13.647 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:13.647 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:13.647 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3SMnuh7Gi7 00:19:13.647 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:13.647 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:13.647 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:13.647 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3SMnuh7Gi7' 00:19:13.647 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:13.647 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1552462 00:19:13.647 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:13.647 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:13.647 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1552462 /var/tmp/bdevperf.sock 00:19:13.647 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1552462 ']' 00:19:13.647 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:13.647 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:13.647 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:13.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:13.647 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:13.647 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.647 [2024-07-24 19:19:59.232327] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:19:13.647 [2024-07-24 19:19:59.232380] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1552462 ] 00:19:13.647 EAL: No free 2048 kB hugepages reported on node 1 00:19:13.647 [2024-07-24 19:19:59.298585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.647 [2024-07-24 19:19:59.366461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:13.907 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:13.907 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:13.907 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.3SMnuh7Gi7 00:19:14.167 [2024-07-24 19:20:00.202025] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:14.167 [2024-07-24 19:20:00.202109] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:14.167 [2024-07-24 19:20:00.210957] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:14.167 [2024-07-24 19:20:00.210980] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:14.167 [2024-07-24 19:20:00.211008] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:14.167 [2024-07-24 19:20:00.211360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcaf5e0 (107): Transport endpoint is not connected 00:19:14.167 [2024-07-24 19:20:00.212352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcaf5e0 (9): Bad file descriptor 00:19:14.167 [2024-07-24 19:20:00.213354] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:14.167 [2024-07-24 19:20:00.213366] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:14.167 [2024-07-24 19:20:00.213377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
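Each negative case repeats the initiator-side flow of the successful run, changing only the host NQN, subsystem NQN, or key. Condensed, with paths relative to the spdk tree:

    # One bdevperf TLS attempt, as traced repeatedly in this log.
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 &          # -z: start idle, wait for RPC configuration

    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.3SMnuh7Gi7                 # NQNs and key vary per case

    ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests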
00:19:14.167 request: 00:19:14.167 { 00:19:14.167 "name": "TLSTEST", 00:19:14.167 "trtype": "tcp", 00:19:14.167 "traddr": "10.0.0.2", 00:19:14.167 "adrfam": "ipv4", 00:19:14.167 "trsvcid": "4420", 00:19:14.167 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.167 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:14.167 "prchk_reftag": false, 00:19:14.167 "prchk_guard": false, 00:19:14.167 "hdgst": false, 00:19:14.167 "ddgst": false, 00:19:14.167 "psk": "/tmp/tmp.3SMnuh7Gi7", 00:19:14.167 "method": "bdev_nvme_attach_controller", 00:19:14.167 "req_id": 1 00:19:14.167 } 00:19:14.167 Got JSON-RPC error response 00:19:14.167 response: 00:19:14.167 { 00:19:14.167 "code": -5, 00:19:14.167 "message": "Input/output error" 00:19:14.167 } 00:19:14.167 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1552462 00:19:14.167 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1552462 ']' 00:19:14.167 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1552462 00:19:14.167 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:14.167 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:14.167 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1552462 00:19:14.167 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:14.167 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:14.167 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1552462' 00:19:14.167 killing process with pid 1552462 00:19:14.167 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1552462 00:19:14.167 Received shutdown signal, test time was about 10.000000 seconds 00:19:14.167 00:19:14.167 Latency(us) 00:19:14.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.167 =================================================================================================================== 00:19:14.167 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:14.167 [2024-07-24 19:20:00.295006] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:14.167 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1552462 00:19:14.520 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:14.520 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:14.520 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:14.520 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:14.520 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:14.520 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3SMnuh7Gi7 00:19:14.520 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:14.520 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3SMnuh7Gi7 00:19:14.520 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:14.520 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:14.520 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:14.520 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:14.520 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3SMnuh7Gi7 00:19:14.520 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:14.520 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:14.520 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:14.520 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3SMnuh7Gi7' 00:19:14.520 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:14.520 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1552734 00:19:14.520 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:14.520 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:14.520 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1552734 /var/tmp/bdevperf.sock 00:19:14.520 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1552734 ']' 00:19:14.520 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:14.520 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:14.520 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:14.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:14.520 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:14.520 19:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.520 [2024-07-24 19:20:00.518371] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:19:14.520 [2024-07-24 19:20:00.518421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1552734 ] 00:19:14.520 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.520 [2024-07-24 19:20:00.583705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.520 [2024-07-24 19:20:00.647039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:15.088 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:15.088 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:15.088 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3SMnuh7Gi7 00:19:15.347 [2024-07-24 19:20:01.473974] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:15.347 [2024-07-24 19:20:01.474060] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:15.347 [2024-07-24 19:20:01.484183] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:15.347 [2024-07-24 19:20:01.484208] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:15.347 [2024-07-24 19:20:01.484234] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:15.347 [2024-07-24 19:20:01.484391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5845e0 (107): Transport endpoint is not connected 00:19:15.347 [2024-07-24 19:20:01.485384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5845e0 (9): Bad file descriptor 00:19:15.347 [2024-07-24 19:20:01.486388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:15.347 [2024-07-24 19:20:01.486401] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:15.347 [2024-07-24 19:20:01.486414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
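Note what actually failed here: the key bytes in tmp.3SMnuh7Gi7 are valid, but the target looks the PSK up by the identity the initiator presents during the handshake, which per the tcp.c/posix.c errors above has the form "NVMe0R01 <hostnqn> <subnqn>". Only the pair registered with nvmf_subsystem_add_host (host1 against cnode1) resolves; host1/cnode2 and host2/cnode1 do not:

    # Identity string the target searches for (shape taken from the errors above).
    hostnqn=nqn.2016-06.io.spdk:host1
    subnqn=nqn.2016-06.io.spdk:cnode2
    printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"   # unregistered pair -> no PSK found -> handshake aborted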
00:19:15.347 request: 00:19:15.347 { 00:19:15.347 "name": "TLSTEST", 00:19:15.347 "trtype": "tcp", 00:19:15.347 "traddr": "10.0.0.2", 00:19:15.347 "adrfam": "ipv4", 00:19:15.347 "trsvcid": "4420", 00:19:15.347 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:15.347 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:15.347 "prchk_reftag": false, 00:19:15.347 "prchk_guard": false, 00:19:15.347 "hdgst": false, 00:19:15.347 "ddgst": false, 00:19:15.347 "psk": "/tmp/tmp.3SMnuh7Gi7", 00:19:15.347 "method": "bdev_nvme_attach_controller", 00:19:15.347 "req_id": 1 00:19:15.347 } 00:19:15.347 Got JSON-RPC error response 00:19:15.347 response: 00:19:15.347 { 00:19:15.347 "code": -5, 00:19:15.347 "message": "Input/output error" 00:19:15.347 } 00:19:15.347 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1552734 00:19:15.347 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1552734 ']' 00:19:15.347 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1552734 00:19:15.347 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:15.347 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:15.347 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1552734 00:19:15.347 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:15.347 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:15.347 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1552734' 00:19:15.347 killing process with pid 1552734 00:19:15.347 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1552734 00:19:15.347 Received shutdown signal, test time was about 10.000000 seconds 00:19:15.347 00:19:15.347 Latency(us) 00:19:15.347 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.347 =================================================================================================================== 00:19:15.347 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:15.347 [2024-07-24 19:20:01.565087] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:15.347 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1552734 00:19:15.607 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:15.607 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:15.607 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:15.607 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:15.607 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:15.607 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:15.607 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:15.607 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:15.607 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:15.607 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:15.607 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:15.607 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:15.607 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:15.607 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:15.607 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:15.607 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:15.607 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:15.607 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:15.607 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1553004 00:19:15.607 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:15.607 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:15.607 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1553004 /var/tmp/bdevperf.sock 00:19:15.607 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1553004 ']' 00:19:15.607 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:15.607 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:15.607 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:15.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:15.607 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:15.607 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.607 [2024-07-24 19:20:01.786321] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:19:15.607 [2024-07-24 19:20:01.786373] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1553004 ] 00:19:15.607 EAL: No free 2048 kB hugepages reported on node 1 00:19:15.866 [2024-07-24 19:20:01.853035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.866 [2024-07-24 19:20:01.917013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:16.433 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:16.433 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:16.433 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:16.693 [2024-07-24 19:20:02.757409] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:16.693 [2024-07-24 19:20:02.759134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f9b50 (9): Bad file descriptor 00:19:16.693 [2024-07-24 19:20:02.760131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:16.693 [2024-07-24 19:20:02.760145] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:16.693 [2024-07-24 19:20:02.760156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
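The failure above is the point of target/tls.sh@155: the listener in play is TLS-enabled, so a connection that never presents a PSK is dropped during TCP initialization (spdk_sock_recv() errno 107, then the controller enters the failed state) and bdev_nvme_attach_controller surfaces -5 in the JSON-RPC response that follows. A minimal sketch of the two attach variants, using the same rpc.py invocation seen throughout this log with the long workspace prefix shortened to scripts/rpc.py:

# No --psk against a TLS-enabled listener: the connection is torn down,
# which is the -5 Input/output error reported below.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

# With --psk, and assuming the key matches the one registered on the
# target for this host NQN, the same attach succeeds (the working
# TLSTESTn1 run further down uses the key file created at tls.sh@159).
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.pO6GwbkNDs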
00:19:16.693 request: 00:19:16.693 { 00:19:16.693 "name": "TLSTEST", 00:19:16.693 "trtype": "tcp", 00:19:16.693 "traddr": "10.0.0.2", 00:19:16.693 "adrfam": "ipv4", 00:19:16.693 "trsvcid": "4420", 00:19:16.693 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.693 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:16.693 "prchk_reftag": false, 00:19:16.693 "prchk_guard": false, 00:19:16.693 "hdgst": false, 00:19:16.693 "ddgst": false, 00:19:16.693 "method": "bdev_nvme_attach_controller", 00:19:16.693 "req_id": 1 00:19:16.693 } 00:19:16.693 Got JSON-RPC error response 00:19:16.693 response: 00:19:16.693 { 00:19:16.693 "code": -5, 00:19:16.693 "message": "Input/output error" 00:19:16.693 } 00:19:16.693 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1553004 00:19:16.693 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1553004 ']' 00:19:16.693 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1553004 00:19:16.693 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:16.693 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:16.693 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1553004 00:19:16.693 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:16.693 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:16.693 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1553004' 00:19:16.693 killing process with pid 1553004 00:19:16.693 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1553004 00:19:16.693 Received shutdown signal, test time was about 10.000000 seconds 00:19:16.693 00:19:16.693 Latency(us) 00:19:16.693 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.693 =================================================================================================================== 00:19:16.693 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:16.693 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1553004 00:19:16.952 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:16.952 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:16.952 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:16.952 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:16.952 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:16.952 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 1547926 00:19:16.952 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1547926 ']' 00:19:16.952 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1547926 00:19:16.952 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:16.952 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:16.952 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1547926 00:19:16.952 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:16.952 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:16.952 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1547926' 00:19:16.952 killing process with pid 1547926 00:19:16.952 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1547926 00:19:16.952 [2024-07-24 19:20:03.062964] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:16.952 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1547926 00:19:17.212 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:17.212 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:17.212 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:17.212 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:17.212 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:17.212 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:19:17.212 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:17.212 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:17.212 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:19:17.212 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.pO6GwbkNDs 00:19:17.212 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:17.212 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.pO6GwbkNDs 00:19:17.212 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:19:17.212 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:17.212 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:17.212 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.212 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1553284 00:19:17.212 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:17.212 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1553284 00:19:17.212 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1553284 ']' 00:19:17.212 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.212 19:20:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:17.212 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.212 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:17.212 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.212 [2024-07-24 19:20:03.365582] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:19:17.212 [2024-07-24 19:20:03.365642] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:17.212 EAL: No free 2048 kB hugepages reported on node 1 00:19:17.212 [2024-07-24 19:20:03.439175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.470 [2024-07-24 19:20:03.511551] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:17.471 [2024-07-24 19:20:03.511591] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:17.471 [2024-07-24 19:20:03.511600] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:17.471 [2024-07-24 19:20:03.511609] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:17.471 [2024-07-24 19:20:03.511616] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
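The NVMeTLSkey-1:02:... value generated at target/tls.sh@159 above is the printable interchange form of the 48-byte secret. A hedged reconstruction of the format_key helper, inferred from its inputs and the value it printed (the little-endian CRC byte order is an assumption that matches the logged "...wWXNJw==:" suffix):

format_key() {
  # prefix: "NVMeTLSkey-1"; key: the configured secret, treated as opaque
  # bytes; digest: 2 is printed as the "02" hash identifier (SHA-384).
  local prefix=$1 key=$2 digest=$3
  python - "$prefix" "$key" "$digest" <<'EOF'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
crc = zlib.crc32(key).to_bytes(4, "little")   # 4-byte integrity tag
print("{}:{:02x}:{}:".format(prefix, digest,
                             base64.b64encode(key + crc).decode()))
EOF
}

format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2
# -> NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:

The key is then written to a mktemp file and restricted to mode 0600, which matters for the permission tests further down.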
00:19:17.471 [2024-07-24 19:20:03.511643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:18.038 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:18.038 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:18.038 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:18.038 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:18.038 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.038 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.038 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.pO6GwbkNDs 00:19:18.038 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.pO6GwbkNDs 00:19:18.038 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:18.297 [2024-07-24 19:20:04.354737] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:18.297 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:18.556 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:18.556 [2024-07-24 19:20:04.707635] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:18.556 [2024-07-24 19:20:04.707829] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:18.556 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:18.816 malloc0 00:19:18.816 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:19.075 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pO6GwbkNDs 00:19:19.075 [2024-07-24 19:20:05.209141] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:19.075 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pO6GwbkNDs 00:19:19.075 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:19.075 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:19.075 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:19.075 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.pO6GwbkNDs' 00:19:19.075 19:20:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:19.075 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1553580 00:19:19.075 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:19.075 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:19.075 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1553580 /var/tmp/bdevperf.sock 00:19:19.075 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1553580 ']' 00:19:19.075 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:19.075 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:19.075 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:19.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:19.075 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:19.075 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:19.075 [2024-07-24 19:20:05.273559] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:19:19.075 [2024-07-24 19:20:05.273607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1553580 ] 00:19:19.075 EAL: No free 2048 kB hugepages reported on node 1 00:19:19.336 [2024-07-24 19:20:05.338747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.336 [2024-07-24 19:20:05.413680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:19.905 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:19.905 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:19.905 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pO6GwbkNDs 00:19:20.164 [2024-07-24 19:20:06.237160] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:20.165 [2024-07-24 19:20:06.237246] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:20.165 TLSTESTn1 00:19:20.165 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:20.424 Running I/O for 10 seconds... 
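For reference, the working path that produced TLSTESTn1 above reduces to the following target-side RPCs, a condensed replay of target/tls.sh@165 with the workspace prefix shortened to scripts/rpc.py:

# Transport, subsystem, and a listener flagged as TLS with -k
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k

# Backing namespace: a 32 MiB malloc bdev with 4096-byte blocks
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# Authorize the host NQN and bind it to the PSK file (mode 0600)
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pO6GwbkNDs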
00:19:30.406 00:19:30.407 Latency(us) 00:19:30.407 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.407 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:30.407 Verification LBA range: start 0x0 length 0x2000 00:19:30.407 TLSTESTn1 : 10.02 5458.71 21.32 0.00 0.00 23406.02 6710.89 51589.94 00:19:30.407 =================================================================================================================== 00:19:30.407 Total : 5458.71 21.32 0.00 0.00 23406.02 6710.89 51589.94 00:19:30.407 0 00:19:30.407 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:30.407 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 1553580 00:19:30.407 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1553580 ']' 00:19:30.407 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1553580 00:19:30.407 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:30.407 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:30.407 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1553580 00:19:30.407 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:30.407 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:30.407 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1553580' 00:19:30.407 killing process with pid 1553580 00:19:30.407 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1553580 00:19:30.407 Received shutdown signal, test time was about 10.000000 seconds 00:19:30.407 00:19:30.407 Latency(us) 00:19:30.407 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.407 =================================================================================================================== 00:19:30.407 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:30.407 [2024-07-24 19:20:16.537977] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:30.407 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1553580 00:19:30.667 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.pO6GwbkNDs 00:19:30.667 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pO6GwbkNDs 00:19:30.667 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:30.667 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pO6GwbkNDs 00:19:30.667 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:30.667 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:30.667 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:30.667 
19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:30.667 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pO6GwbkNDs 00:19:30.667 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:30.667 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:30.667 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:30.667 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.pO6GwbkNDs' 00:19:30.667 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:30.667 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1555431 00:19:30.667 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:30.667 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:30.667 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1555431 /var/tmp/bdevperf.sock 00:19:30.667 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1555431 ']' 00:19:30.667 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:30.667 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:30.667 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:30.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:30.667 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:30.667 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.667 [2024-07-24 19:20:16.772207] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:19:30.667 [2024-07-24 19:20:16.772260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1555431 ] 00:19:30.667 EAL: No free 2048 kB hugepages reported on node 1 00:19:30.667 [2024-07-24 19:20:16.837177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.667 [2024-07-24 19:20:16.900667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.605 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:31.605 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:31.605 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pO6GwbkNDs 00:19:31.605 [2024-07-24 19:20:17.727589] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:31.605 [2024-07-24 19:20:17.727645] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:31.605 [2024-07-24 19:20:17.727654] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.pO6GwbkNDs 00:19:31.605 request: 00:19:31.605 { 00:19:31.605 "name": "TLSTEST", 00:19:31.605 "trtype": "tcp", 00:19:31.605 "traddr": "10.0.0.2", 00:19:31.605 "adrfam": "ipv4", 00:19:31.605 "trsvcid": "4420", 00:19:31.605 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:31.605 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:31.605 "prchk_reftag": false, 00:19:31.605 "prchk_guard": false, 00:19:31.605 "hdgst": false, 00:19:31.605 "ddgst": false, 00:19:31.605 "psk": "/tmp/tmp.pO6GwbkNDs", 00:19:31.605 "method": "bdev_nvme_attach_controller", 00:19:31.605 "req_id": 1 00:19:31.605 } 00:19:31.605 Got JSON-RPC error response 00:19:31.605 response: 00:19:31.605 { 00:19:31.605 "code": -1, 00:19:31.605 "message": "Operation not permitted" 00:19:31.605 } 00:19:31.605 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1555431 00:19:31.605 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1555431 ']' 00:19:31.605 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1555431 00:19:31.605 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:31.605 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:31.605 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1555431 00:19:31.605 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:31.605 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:31.605 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1555431' 00:19:31.605 killing process with pid 1555431 00:19:31.605 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1555431 00:19:31.605 Received shutdown signal, test time was about 10.000000 seconds 00:19:31.605 
00:19:31.605 Latency(us) 00:19:31.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.605 =================================================================================================================== 00:19:31.605 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:31.605 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1555431 00:19:31.864 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:31.864 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:31.864 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:31.864 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:31.864 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:31.864 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 1553284 00:19:31.864 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1553284 ']' 00:19:31.864 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1553284 00:19:31.864 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:31.864 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:31.865 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1553284 00:19:31.865 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:31.865 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:31.865 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1553284' 00:19:31.865 killing process with pid 1553284 00:19:31.865 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1553284 00:19:31.865 [2024-07-24 19:20:18.007395] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:31.865 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1553284 00:19:32.124 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:19:32.124 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:32.124 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:32.124 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.124 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1555708 00:19:32.124 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:32.124 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1555708 00:19:32.124 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1555708 ']' 00:19:32.124 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.124 19:20:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:32.124 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:32.124 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:32.124 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.124 [2024-07-24 19:20:18.249562] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:19:32.124 [2024-07-24 19:20:18.249615] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:32.124 EAL: No free 2048 kB hugepages reported on node 1 00:19:32.124 [2024-07-24 19:20:18.324490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.384 [2024-07-24 19:20:18.388938] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:32.384 [2024-07-24 19:20:18.388978] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:32.384 [2024-07-24 19:20:18.388987] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:32.384 [2024-07-24 19:20:18.388995] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:32.384 [2024-07-24 19:20:18.389002] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
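The chmod 0666 at target/tls.sh@170 above is deliberate: the initiator refused the world-readable key (bdev_nvme_load_psk, "Incorrect permissions for PSK file", surfaced as -1 Operation not permitted), and the target-side setup replayed below trips over the same check in tcp_load_psk. A pre-flight check in the same spirit, as a sketch only; the exact set of modes SPDK tolerates is not shown in this log, 0600 being the one the test uses:

# Refuse a PSK file whose group/other permission bits are set
key=/tmp/tmp.pO6GwbkNDs
mode=$(stat -c %a "$key")            # e.g. 600 or 666
if (( 8#$mode & 8#077 )); then
    echo "refusing $key: mode $mode is too permissive, want 0600" >&2
    exit 1
fi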
00:19:32.384 [2024-07-24 19:20:18.389022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:32.952 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:32.952 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:32.952 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:32.952 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:32.952 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.952 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:32.952 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.pO6GwbkNDs 00:19:32.952 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:32.952 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.pO6GwbkNDs 00:19:32.952 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:19:32.952 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:32.952 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:19:32.952 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:32.952 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.pO6GwbkNDs 00:19:32.953 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.pO6GwbkNDs 00:19:32.953 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:33.211 [2024-07-24 19:20:19.247564] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:33.211 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:33.211 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:33.470 [2024-07-24 19:20:19.588423] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:33.470 [2024-07-24 19:20:19.588601] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:33.470 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:33.728 malloc0 00:19:33.728 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:33.729 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pO6GwbkNDs 00:19:33.987 [2024-07-24 19:20:20.069863] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:33.987 [2024-07-24 19:20:20.069898] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:19:33.987 [2024-07-24 19:20:20.069923] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:33.987 request: 00:19:33.987 { 00:19:33.987 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.987 "host": "nqn.2016-06.io.spdk:host1", 00:19:33.987 "psk": "/tmp/tmp.pO6GwbkNDs", 00:19:33.987 "method": "nvmf_subsystem_add_host", 00:19:33.987 "req_id": 1 00:19:33.987 } 00:19:33.987 Got JSON-RPC error response 00:19:33.987 response: 00:19:33.987 { 00:19:33.987 "code": -32603, 00:19:33.987 "message": "Internal error" 00:19:33.987 } 00:19:33.987 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:33.987 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:33.987 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:33.987 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:33.987 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 1555708 00:19:33.987 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1555708 ']' 00:19:33.987 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1555708 00:19:33.987 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:33.987 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:33.987 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1555708 00:19:33.987 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:33.987 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:33.987 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1555708' 00:19:33.987 killing process with pid 1555708 00:19:33.987 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1555708 00:19:33.987 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1555708 00:19:34.248 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.pO6GwbkNDs 00:19:34.248 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:19:34.248 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:34.248 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:34.248 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.248 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1556115 00:19:34.248 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:34.248 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # 
waitforlisten 1556115 00:19:34.248 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1556115 ']' 00:19:34.248 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.248 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:34.248 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.248 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:34.248 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.248 [2024-07-24 19:20:20.401584] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:19:34.248 [2024-07-24 19:20:20.401633] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:34.248 EAL: No free 2048 kB hugepages reported on node 1 00:19:34.248 [2024-07-24 19:20:20.477168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.590 [2024-07-24 19:20:20.548791] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:34.590 [2024-07-24 19:20:20.548831] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:34.590 [2024-07-24 19:20:20.548841] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:34.590 [2024-07-24 19:20:20.548850] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:34.590 [2024-07-24 19:20:20.548857] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
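Note the asymmetry in how the same permission failure is reported: the initiator returned -1 (Operation not permitted), while nvmf_subsystem_add_host on the target maps it to a generic -32603 Internal error. The chmod 0600 at target/tls.sh@181 is the whole fix; a sketch of the tighten-and-retry sequence (the test opts to restart the target instead, but retrying the rejected RPC against a live target is, presumably, equivalent):

# Restore owner-only access, then re-issue the RPC that was rejected
chmod 0600 /tmp/tmp.pO6GwbkNDs
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pO6GwbkNDs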
00:19:34.590 [2024-07-24 19:20:20.548878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.159 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:35.159 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:35.159 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:35.159 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:35.159 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.159 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:35.159 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.pO6GwbkNDs 00:19:35.159 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.pO6GwbkNDs 00:19:35.159 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:35.418 [2024-07-24 19:20:21.408358] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:35.418 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:35.418 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:35.677 [2024-07-24 19:20:21.737230] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:35.677 [2024-07-24 19:20:21.737420] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:35.677 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:35.677 malloc0 00:19:35.677 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:35.936 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pO6GwbkNDs 00:19:36.195 [2024-07-24 19:20:22.218782] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:36.195 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:36.195 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1556548 00:19:36.195 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:36.195 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1556548 /var/tmp/bdevperf.sock 00:19:36.195 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- 
# '[' -z 1556548 ']' 00:19:36.195 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:36.195 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:36.195 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:36.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:36.195 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:36.195 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.195 [2024-07-24 19:20:22.271936] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:19:36.195 [2024-07-24 19:20:22.271986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1556548 ] 00:19:36.195 EAL: No free 2048 kB hugepages reported on node 1 00:19:36.195 [2024-07-24 19:20:22.338100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.195 [2024-07-24 19:20:22.407115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:37.132 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:37.132 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:37.132 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pO6GwbkNDs 00:19:37.132 [2024-07-24 19:20:23.225885] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:37.132 [2024-07-24 19:20:23.225964] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:37.132 TLSTESTn1 00:19:37.132 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:37.391 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:19:37.391 "subsystems": [ 00:19:37.391 { 00:19:37.391 "subsystem": "keyring", 00:19:37.391 "config": [] 00:19:37.391 }, 00:19:37.391 { 00:19:37.391 "subsystem": "iobuf", 00:19:37.391 "config": [ 00:19:37.391 { 00:19:37.391 "method": "iobuf_set_options", 00:19:37.391 "params": { 00:19:37.391 "small_pool_count": 8192, 00:19:37.391 "large_pool_count": 1024, 00:19:37.391 "small_bufsize": 8192, 00:19:37.391 "large_bufsize": 135168 00:19:37.391 } 00:19:37.391 } 00:19:37.391 ] 00:19:37.391 }, 00:19:37.391 { 00:19:37.391 "subsystem": "sock", 00:19:37.391 "config": [ 00:19:37.391 { 00:19:37.391 "method": "sock_set_default_impl", 00:19:37.391 "params": { 00:19:37.391 "impl_name": "posix" 00:19:37.391 } 00:19:37.391 }, 00:19:37.391 { 00:19:37.391 "method": "sock_impl_set_options", 00:19:37.391 "params": { 00:19:37.391 "impl_name": "ssl", 00:19:37.391 "recv_buf_size": 4096, 00:19:37.391 "send_buf_size": 4096, 
00:19:37.391 "enable_recv_pipe": true, 00:19:37.391 "enable_quickack": false, 00:19:37.391 "enable_placement_id": 0, 00:19:37.391 "enable_zerocopy_send_server": true, 00:19:37.391 "enable_zerocopy_send_client": false, 00:19:37.391 "zerocopy_threshold": 0, 00:19:37.391 "tls_version": 0, 00:19:37.391 "enable_ktls": false 00:19:37.391 } 00:19:37.391 }, 00:19:37.391 { 00:19:37.391 "method": "sock_impl_set_options", 00:19:37.391 "params": { 00:19:37.391 "impl_name": "posix", 00:19:37.391 "recv_buf_size": 2097152, 00:19:37.391 "send_buf_size": 2097152, 00:19:37.391 "enable_recv_pipe": true, 00:19:37.391 "enable_quickack": false, 00:19:37.391 "enable_placement_id": 0, 00:19:37.391 "enable_zerocopy_send_server": true, 00:19:37.391 "enable_zerocopy_send_client": false, 00:19:37.391 "zerocopy_threshold": 0, 00:19:37.391 "tls_version": 0, 00:19:37.391 "enable_ktls": false 00:19:37.391 } 00:19:37.391 } 00:19:37.391 ] 00:19:37.391 }, 00:19:37.391 { 00:19:37.391 "subsystem": "vmd", 00:19:37.391 "config": [] 00:19:37.391 }, 00:19:37.391 { 00:19:37.391 "subsystem": "accel", 00:19:37.391 "config": [ 00:19:37.391 { 00:19:37.391 "method": "accel_set_options", 00:19:37.391 "params": { 00:19:37.391 "small_cache_size": 128, 00:19:37.391 "large_cache_size": 16, 00:19:37.391 "task_count": 2048, 00:19:37.391 "sequence_count": 2048, 00:19:37.391 "buf_count": 2048 00:19:37.391 } 00:19:37.391 } 00:19:37.391 ] 00:19:37.391 }, 00:19:37.391 { 00:19:37.391 "subsystem": "bdev", 00:19:37.391 "config": [ 00:19:37.391 { 00:19:37.391 "method": "bdev_set_options", 00:19:37.391 "params": { 00:19:37.391 "bdev_io_pool_size": 65535, 00:19:37.391 "bdev_io_cache_size": 256, 00:19:37.391 "bdev_auto_examine": true, 00:19:37.391 "iobuf_small_cache_size": 128, 00:19:37.391 "iobuf_large_cache_size": 16 00:19:37.391 } 00:19:37.391 }, 00:19:37.391 { 00:19:37.391 "method": "bdev_raid_set_options", 00:19:37.391 "params": { 00:19:37.391 "process_window_size_kb": 1024, 00:19:37.391 "process_max_bandwidth_mb_sec": 0 00:19:37.391 } 00:19:37.391 }, 00:19:37.391 { 00:19:37.391 "method": "bdev_iscsi_set_options", 00:19:37.391 "params": { 00:19:37.391 "timeout_sec": 30 00:19:37.391 } 00:19:37.391 }, 00:19:37.391 { 00:19:37.391 "method": "bdev_nvme_set_options", 00:19:37.391 "params": { 00:19:37.391 "action_on_timeout": "none", 00:19:37.391 "timeout_us": 0, 00:19:37.391 "timeout_admin_us": 0, 00:19:37.391 "keep_alive_timeout_ms": 10000, 00:19:37.391 "arbitration_burst": 0, 00:19:37.391 "low_priority_weight": 0, 00:19:37.391 "medium_priority_weight": 0, 00:19:37.391 "high_priority_weight": 0, 00:19:37.391 "nvme_adminq_poll_period_us": 10000, 00:19:37.391 "nvme_ioq_poll_period_us": 0, 00:19:37.391 "io_queue_requests": 0, 00:19:37.391 "delay_cmd_submit": true, 00:19:37.391 "transport_retry_count": 4, 00:19:37.391 "bdev_retry_count": 3, 00:19:37.391 "transport_ack_timeout": 0, 00:19:37.391 "ctrlr_loss_timeout_sec": 0, 00:19:37.391 "reconnect_delay_sec": 0, 00:19:37.391 "fast_io_fail_timeout_sec": 0, 00:19:37.391 "disable_auto_failback": false, 00:19:37.391 "generate_uuids": false, 00:19:37.391 "transport_tos": 0, 00:19:37.391 "nvme_error_stat": false, 00:19:37.391 "rdma_srq_size": 0, 00:19:37.391 "io_path_stat": false, 00:19:37.391 "allow_accel_sequence": false, 00:19:37.391 "rdma_max_cq_size": 0, 00:19:37.391 "rdma_cm_event_timeout_ms": 0, 00:19:37.391 "dhchap_digests": [ 00:19:37.391 "sha256", 00:19:37.391 "sha384", 00:19:37.391 "sha512" 00:19:37.391 ], 00:19:37.391 "dhchap_dhgroups": [ 00:19:37.391 "null", 00:19:37.391 "ffdhe2048", 00:19:37.391 
"ffdhe3072", 00:19:37.391 "ffdhe4096", 00:19:37.391 "ffdhe6144", 00:19:37.391 "ffdhe8192" 00:19:37.391 ] 00:19:37.391 } 00:19:37.391 }, 00:19:37.391 { 00:19:37.391 "method": "bdev_nvme_set_hotplug", 00:19:37.391 "params": { 00:19:37.391 "period_us": 100000, 00:19:37.391 "enable": false 00:19:37.391 } 00:19:37.391 }, 00:19:37.391 { 00:19:37.391 "method": "bdev_malloc_create", 00:19:37.391 "params": { 00:19:37.391 "name": "malloc0", 00:19:37.391 "num_blocks": 8192, 00:19:37.391 "block_size": 4096, 00:19:37.391 "physical_block_size": 4096, 00:19:37.391 "uuid": "fb17be57-bd47-4977-865d-a49e69a864c1", 00:19:37.391 "optimal_io_boundary": 0, 00:19:37.391 "md_size": 0, 00:19:37.391 "dif_type": 0, 00:19:37.391 "dif_is_head_of_md": false, 00:19:37.391 "dif_pi_format": 0 00:19:37.391 } 00:19:37.391 }, 00:19:37.391 { 00:19:37.391 "method": "bdev_wait_for_examine" 00:19:37.391 } 00:19:37.391 ] 00:19:37.391 }, 00:19:37.391 { 00:19:37.391 "subsystem": "nbd", 00:19:37.391 "config": [] 00:19:37.391 }, 00:19:37.391 { 00:19:37.392 "subsystem": "scheduler", 00:19:37.392 "config": [ 00:19:37.392 { 00:19:37.392 "method": "framework_set_scheduler", 00:19:37.392 "params": { 00:19:37.392 "name": "static" 00:19:37.392 } 00:19:37.392 } 00:19:37.392 ] 00:19:37.392 }, 00:19:37.392 { 00:19:37.392 "subsystem": "nvmf", 00:19:37.392 "config": [ 00:19:37.392 { 00:19:37.392 "method": "nvmf_set_config", 00:19:37.392 "params": { 00:19:37.392 "discovery_filter": "match_any", 00:19:37.392 "admin_cmd_passthru": { 00:19:37.392 "identify_ctrlr": false 00:19:37.392 } 00:19:37.392 } 00:19:37.392 }, 00:19:37.392 { 00:19:37.392 "method": "nvmf_set_max_subsystems", 00:19:37.392 "params": { 00:19:37.392 "max_subsystems": 1024 00:19:37.392 } 00:19:37.392 }, 00:19:37.392 { 00:19:37.392 "method": "nvmf_set_crdt", 00:19:37.392 "params": { 00:19:37.392 "crdt1": 0, 00:19:37.392 "crdt2": 0, 00:19:37.392 "crdt3": 0 00:19:37.392 } 00:19:37.392 }, 00:19:37.392 { 00:19:37.392 "method": "nvmf_create_transport", 00:19:37.392 "params": { 00:19:37.392 "trtype": "TCP", 00:19:37.392 "max_queue_depth": 128, 00:19:37.392 "max_io_qpairs_per_ctrlr": 127, 00:19:37.392 "in_capsule_data_size": 4096, 00:19:37.392 "max_io_size": 131072, 00:19:37.392 "io_unit_size": 131072, 00:19:37.392 "max_aq_depth": 128, 00:19:37.392 "num_shared_buffers": 511, 00:19:37.392 "buf_cache_size": 4294967295, 00:19:37.392 "dif_insert_or_strip": false, 00:19:37.392 "zcopy": false, 00:19:37.392 "c2h_success": false, 00:19:37.392 "sock_priority": 0, 00:19:37.392 "abort_timeout_sec": 1, 00:19:37.392 "ack_timeout": 0, 00:19:37.392 "data_wr_pool_size": 0 00:19:37.392 } 00:19:37.392 }, 00:19:37.392 { 00:19:37.392 "method": "nvmf_create_subsystem", 00:19:37.392 "params": { 00:19:37.392 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.392 "allow_any_host": false, 00:19:37.392 "serial_number": "SPDK00000000000001", 00:19:37.392 "model_number": "SPDK bdev Controller", 00:19:37.392 "max_namespaces": 10, 00:19:37.392 "min_cntlid": 1, 00:19:37.392 "max_cntlid": 65519, 00:19:37.392 "ana_reporting": false 00:19:37.392 } 00:19:37.392 }, 00:19:37.392 { 00:19:37.392 "method": "nvmf_subsystem_add_host", 00:19:37.392 "params": { 00:19:37.392 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.392 "host": "nqn.2016-06.io.spdk:host1", 00:19:37.392 "psk": "/tmp/tmp.pO6GwbkNDs" 00:19:37.392 } 00:19:37.392 }, 00:19:37.392 { 00:19:37.392 "method": "nvmf_subsystem_add_ns", 00:19:37.392 "params": { 00:19:37.392 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.392 "namespace": { 00:19:37.392 "nsid": 1, 00:19:37.392 
"bdev_name": "malloc0", 00:19:37.392 "nguid": "FB17BE57BD474977865DA49E69A864C1", 00:19:37.392 "uuid": "fb17be57-bd47-4977-865d-a49e69a864c1", 00:19:37.392 "no_auto_visible": false 00:19:37.392 } 00:19:37.392 } 00:19:37.392 }, 00:19:37.392 { 00:19:37.392 "method": "nvmf_subsystem_add_listener", 00:19:37.392 "params": { 00:19:37.392 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.392 "listen_address": { 00:19:37.392 "trtype": "TCP", 00:19:37.392 "adrfam": "IPv4", 00:19:37.392 "traddr": "10.0.0.2", 00:19:37.392 "trsvcid": "4420" 00:19:37.392 }, 00:19:37.392 "secure_channel": true 00:19:37.392 } 00:19:37.392 } 00:19:37.392 ] 00:19:37.392 } 00:19:37.392 ] 00:19:37.392 }' 00:19:37.392 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:37.652 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:19:37.652 "subsystems": [ 00:19:37.652 { 00:19:37.652 "subsystem": "keyring", 00:19:37.652 "config": [] 00:19:37.652 }, 00:19:37.652 { 00:19:37.652 "subsystem": "iobuf", 00:19:37.652 "config": [ 00:19:37.652 { 00:19:37.652 "method": "iobuf_set_options", 00:19:37.652 "params": { 00:19:37.652 "small_pool_count": 8192, 00:19:37.652 "large_pool_count": 1024, 00:19:37.652 "small_bufsize": 8192, 00:19:37.652 "large_bufsize": 135168 00:19:37.652 } 00:19:37.652 } 00:19:37.652 ] 00:19:37.652 }, 00:19:37.652 { 00:19:37.652 "subsystem": "sock", 00:19:37.652 "config": [ 00:19:37.652 { 00:19:37.652 "method": "sock_set_default_impl", 00:19:37.652 "params": { 00:19:37.652 "impl_name": "posix" 00:19:37.652 } 00:19:37.652 }, 00:19:37.652 { 00:19:37.652 "method": "sock_impl_set_options", 00:19:37.652 "params": { 00:19:37.652 "impl_name": "ssl", 00:19:37.652 "recv_buf_size": 4096, 00:19:37.652 "send_buf_size": 4096, 00:19:37.652 "enable_recv_pipe": true, 00:19:37.652 "enable_quickack": false, 00:19:37.652 "enable_placement_id": 0, 00:19:37.652 "enable_zerocopy_send_server": true, 00:19:37.652 "enable_zerocopy_send_client": false, 00:19:37.652 "zerocopy_threshold": 0, 00:19:37.652 "tls_version": 0, 00:19:37.652 "enable_ktls": false 00:19:37.652 } 00:19:37.652 }, 00:19:37.652 { 00:19:37.652 "method": "sock_impl_set_options", 00:19:37.652 "params": { 00:19:37.652 "impl_name": "posix", 00:19:37.652 "recv_buf_size": 2097152, 00:19:37.652 "send_buf_size": 2097152, 00:19:37.652 "enable_recv_pipe": true, 00:19:37.652 "enable_quickack": false, 00:19:37.652 "enable_placement_id": 0, 00:19:37.652 "enable_zerocopy_send_server": true, 00:19:37.652 "enable_zerocopy_send_client": false, 00:19:37.652 "zerocopy_threshold": 0, 00:19:37.652 "tls_version": 0, 00:19:37.652 "enable_ktls": false 00:19:37.652 } 00:19:37.652 } 00:19:37.652 ] 00:19:37.652 }, 00:19:37.652 { 00:19:37.652 "subsystem": "vmd", 00:19:37.652 "config": [] 00:19:37.652 }, 00:19:37.652 { 00:19:37.652 "subsystem": "accel", 00:19:37.652 "config": [ 00:19:37.652 { 00:19:37.652 "method": "accel_set_options", 00:19:37.652 "params": { 00:19:37.652 "small_cache_size": 128, 00:19:37.652 "large_cache_size": 16, 00:19:37.652 "task_count": 2048, 00:19:37.652 "sequence_count": 2048, 00:19:37.652 "buf_count": 2048 00:19:37.652 } 00:19:37.652 } 00:19:37.652 ] 00:19:37.652 }, 00:19:37.652 { 00:19:37.652 "subsystem": "bdev", 00:19:37.652 "config": [ 00:19:37.652 { 00:19:37.652 "method": "bdev_set_options", 00:19:37.652 "params": { 00:19:37.652 "bdev_io_pool_size": 65535, 00:19:37.652 "bdev_io_cache_size": 256, 00:19:37.652 
"bdev_auto_examine": true, 00:19:37.652 "iobuf_small_cache_size": 128, 00:19:37.652 "iobuf_large_cache_size": 16 00:19:37.652 } 00:19:37.652 }, 00:19:37.652 { 00:19:37.652 "method": "bdev_raid_set_options", 00:19:37.652 "params": { 00:19:37.652 "process_window_size_kb": 1024, 00:19:37.652 "process_max_bandwidth_mb_sec": 0 00:19:37.652 } 00:19:37.652 }, 00:19:37.652 { 00:19:37.652 "method": "bdev_iscsi_set_options", 00:19:37.652 "params": { 00:19:37.652 "timeout_sec": 30 00:19:37.652 } 00:19:37.652 }, 00:19:37.652 { 00:19:37.652 "method": "bdev_nvme_set_options", 00:19:37.652 "params": { 00:19:37.652 "action_on_timeout": "none", 00:19:37.652 "timeout_us": 0, 00:19:37.652 "timeout_admin_us": 0, 00:19:37.652 "keep_alive_timeout_ms": 10000, 00:19:37.652 "arbitration_burst": 0, 00:19:37.652 "low_priority_weight": 0, 00:19:37.652 "medium_priority_weight": 0, 00:19:37.652 "high_priority_weight": 0, 00:19:37.652 "nvme_adminq_poll_period_us": 10000, 00:19:37.652 "nvme_ioq_poll_period_us": 0, 00:19:37.652 "io_queue_requests": 512, 00:19:37.652 "delay_cmd_submit": true, 00:19:37.652 "transport_retry_count": 4, 00:19:37.652 "bdev_retry_count": 3, 00:19:37.652 "transport_ack_timeout": 0, 00:19:37.652 "ctrlr_loss_timeout_sec": 0, 00:19:37.652 "reconnect_delay_sec": 0, 00:19:37.652 "fast_io_fail_timeout_sec": 0, 00:19:37.652 "disable_auto_failback": false, 00:19:37.652 "generate_uuids": false, 00:19:37.652 "transport_tos": 0, 00:19:37.652 "nvme_error_stat": false, 00:19:37.652 "rdma_srq_size": 0, 00:19:37.652 "io_path_stat": false, 00:19:37.652 "allow_accel_sequence": false, 00:19:37.652 "rdma_max_cq_size": 0, 00:19:37.652 "rdma_cm_event_timeout_ms": 0, 00:19:37.652 "dhchap_digests": [ 00:19:37.652 "sha256", 00:19:37.652 "sha384", 00:19:37.652 "sha512" 00:19:37.652 ], 00:19:37.652 "dhchap_dhgroups": [ 00:19:37.652 "null", 00:19:37.652 "ffdhe2048", 00:19:37.652 "ffdhe3072", 00:19:37.652 "ffdhe4096", 00:19:37.652 "ffdhe6144", 00:19:37.652 "ffdhe8192" 00:19:37.652 ] 00:19:37.652 } 00:19:37.652 }, 00:19:37.652 { 00:19:37.652 "method": "bdev_nvme_attach_controller", 00:19:37.652 "params": { 00:19:37.652 "name": "TLSTEST", 00:19:37.652 "trtype": "TCP", 00:19:37.652 "adrfam": "IPv4", 00:19:37.652 "traddr": "10.0.0.2", 00:19:37.652 "trsvcid": "4420", 00:19:37.652 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.652 "prchk_reftag": false, 00:19:37.652 "prchk_guard": false, 00:19:37.652 "ctrlr_loss_timeout_sec": 0, 00:19:37.652 "reconnect_delay_sec": 0, 00:19:37.652 "fast_io_fail_timeout_sec": 0, 00:19:37.652 "psk": "/tmp/tmp.pO6GwbkNDs", 00:19:37.652 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:37.652 "hdgst": false, 00:19:37.652 "ddgst": false 00:19:37.652 } 00:19:37.652 }, 00:19:37.652 { 00:19:37.652 "method": "bdev_nvme_set_hotplug", 00:19:37.652 "params": { 00:19:37.652 "period_us": 100000, 00:19:37.652 "enable": false 00:19:37.652 } 00:19:37.652 }, 00:19:37.652 { 00:19:37.652 "method": "bdev_wait_for_examine" 00:19:37.652 } 00:19:37.652 ] 00:19:37.652 }, 00:19:37.652 { 00:19:37.652 "subsystem": "nbd", 00:19:37.652 "config": [] 00:19:37.652 } 00:19:37.652 ] 00:19:37.652 }' 00:19:37.652 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 1556548 00:19:37.652 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1556548 ']' 00:19:37.652 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1556548 00:19:37.652 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 
00:19:37.652 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:37.653 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1556548
00:19:37.653 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:19:37.653 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:19:37.653 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1556548'
00:19:37.653 killing process with pid 1556548
00:19:37.912 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1556548
00:19:37.912 Received shutdown signal, test time was about 10.000000 seconds
00:19:37.912
00:19:37.912                                                  Latency(us)
00:19:37.912 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:37.912 ===================================================================================================================
00:19:37.912 Total                       :                   0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:19:37.912 [2024-07-24 19:20:23.890522] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:19:37.912 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1556548
00:19:37.912 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 1556115
00:19:37.912 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1556115 ']'
00:19:37.912 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1556115
00:19:37.912 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:19:37.912 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:37.912 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1556115
00:19:37.912 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:19:37.912 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:19:37.912 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1556115'
00:19:37.912 killing process with pid 1556115
00:19:37.912 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1556115
00:19:37.912 [2024-07-24 19:20:24.123328] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:19:37.912 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1556115
00:19:38.172 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62
00:19:38.172 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:19:38.172 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable
00:19:38.172 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:19:38.172 "subsystems": [ 00:19:38.172 { 00:19:38.172 "subsystem": "keyring", 00:19:38.172 "config": [] 00:19:38.172 }, 00:19:38.172 { 00:19:38.172
"subsystem": "iobuf", 00:19:38.172 "config": [ 00:19:38.172 { 00:19:38.172 "method": "iobuf_set_options", 00:19:38.172 "params": { 00:19:38.172 "small_pool_count": 8192, 00:19:38.172 "large_pool_count": 1024, 00:19:38.172 "small_bufsize": 8192, 00:19:38.172 "large_bufsize": 135168 00:19:38.172 } 00:19:38.172 } 00:19:38.172 ] 00:19:38.172 }, 00:19:38.172 { 00:19:38.172 "subsystem": "sock", 00:19:38.172 "config": [ 00:19:38.172 { 00:19:38.172 "method": "sock_set_default_impl", 00:19:38.172 "params": { 00:19:38.172 "impl_name": "posix" 00:19:38.172 } 00:19:38.172 }, 00:19:38.172 { 00:19:38.172 "method": "sock_impl_set_options", 00:19:38.172 "params": { 00:19:38.172 "impl_name": "ssl", 00:19:38.172 "recv_buf_size": 4096, 00:19:38.172 "send_buf_size": 4096, 00:19:38.172 "enable_recv_pipe": true, 00:19:38.172 "enable_quickack": false, 00:19:38.172 "enable_placement_id": 0, 00:19:38.172 "enable_zerocopy_send_server": true, 00:19:38.172 "enable_zerocopy_send_client": false, 00:19:38.172 "zerocopy_threshold": 0, 00:19:38.172 "tls_version": 0, 00:19:38.172 "enable_ktls": false 00:19:38.172 } 00:19:38.172 }, 00:19:38.172 { 00:19:38.172 "method": "sock_impl_set_options", 00:19:38.172 "params": { 00:19:38.172 "impl_name": "posix", 00:19:38.172 "recv_buf_size": 2097152, 00:19:38.172 "send_buf_size": 2097152, 00:19:38.172 "enable_recv_pipe": true, 00:19:38.172 "enable_quickack": false, 00:19:38.172 "enable_placement_id": 0, 00:19:38.172 "enable_zerocopy_send_server": true, 00:19:38.172 "enable_zerocopy_send_client": false, 00:19:38.172 "zerocopy_threshold": 0, 00:19:38.172 "tls_version": 0, 00:19:38.172 "enable_ktls": false 00:19:38.172 } 00:19:38.172 } 00:19:38.172 ] 00:19:38.172 }, 00:19:38.172 { 00:19:38.172 "subsystem": "vmd", 00:19:38.172 "config": [] 00:19:38.172 }, 00:19:38.172 { 00:19:38.172 "subsystem": "accel", 00:19:38.172 "config": [ 00:19:38.172 { 00:19:38.172 "method": "accel_set_options", 00:19:38.172 "params": { 00:19:38.172 "small_cache_size": 128, 00:19:38.172 "large_cache_size": 16, 00:19:38.172 "task_count": 2048, 00:19:38.172 "sequence_count": 2048, 00:19:38.172 "buf_count": 2048 00:19:38.172 } 00:19:38.172 } 00:19:38.172 ] 00:19:38.172 }, 00:19:38.172 { 00:19:38.172 "subsystem": "bdev", 00:19:38.172 "config": [ 00:19:38.172 { 00:19:38.172 "method": "bdev_set_options", 00:19:38.172 "params": { 00:19:38.172 "bdev_io_pool_size": 65535, 00:19:38.172 "bdev_io_cache_size": 256, 00:19:38.172 "bdev_auto_examine": true, 00:19:38.172 "iobuf_small_cache_size": 128, 00:19:38.172 "iobuf_large_cache_size": 16 00:19:38.172 } 00:19:38.172 }, 00:19:38.172 { 00:19:38.172 "method": "bdev_raid_set_options", 00:19:38.172 "params": { 00:19:38.172 "process_window_size_kb": 1024, 00:19:38.172 "process_max_bandwidth_mb_sec": 0 00:19:38.172 } 00:19:38.172 }, 00:19:38.172 { 00:19:38.172 "method": "bdev_iscsi_set_options", 00:19:38.172 "params": { 00:19:38.172 "timeout_sec": 30 00:19:38.172 } 00:19:38.172 }, 00:19:38.172 { 00:19:38.172 "method": "bdev_nvme_set_options", 00:19:38.172 "params": { 00:19:38.172 "action_on_timeout": "none", 00:19:38.172 "timeout_us": 0, 00:19:38.172 "timeout_admin_us": 0, 00:19:38.172 "keep_alive_timeout_ms": 10000, 00:19:38.172 "arbitration_burst": 0, 00:19:38.172 "low_priority_weight": 0, 00:19:38.172 "medium_priority_weight": 0, 00:19:38.172 "high_priority_weight": 0, 00:19:38.172 "nvme_adminq_poll_period_us": 10000, 00:19:38.172 "nvme_ioq_poll_period_us": 0, 00:19:38.172 "io_queue_requests": 0, 00:19:38.172 "delay_cmd_submit": true, 00:19:38.173 "transport_retry_count": 4, 
00:19:38.173 "bdev_retry_count": 3, 00:19:38.173 "transport_ack_timeout": 0, 00:19:38.173 "ctrlr_loss_timeout_sec": 0, 00:19:38.173 "reconnect_delay_sec": 0, 00:19:38.173 "fast_io_fail_timeout_sec": 0, 00:19:38.173 "disable_auto_failback": false, 00:19:38.173 "generate_uuids": false, 00:19:38.173 "transport_tos": 0, 00:19:38.173 "nvme_error_stat": false, 00:19:38.173 "rdma_srq_size": 0, 00:19:38.173 "io_path_stat": false, 00:19:38.173 "allow_accel_sequence": false, 00:19:38.173 "rdma_max_cq_size": 0, 00:19:38.173 "rdma_cm_event_timeout_ms": 0, 00:19:38.173 "dhchap_digests": [ 00:19:38.173 "sha256", 00:19:38.173 "sha384", 00:19:38.173 "sha512" 00:19:38.173 ], 00:19:38.173 "dhchap_dhgroups": [ 00:19:38.173 "null", 00:19:38.173 "ffdhe2048", 00:19:38.173 "ffdhe3072", 00:19:38.173 "ffdhe4096", 00:19:38.173 "ffdhe6144", 00:19:38.173 "ffdhe8192" 00:19:38.173 ] 00:19:38.173 } 00:19:38.173 }, 00:19:38.173 { 00:19:38.173 "method": "bdev_nvme_set_hotplug", 00:19:38.173 "params": { 00:19:38.173 "period_us": 100000, 00:19:38.173 "enable": false 00:19:38.173 } 00:19:38.173 }, 00:19:38.173 { 00:19:38.173 "method": "bdev_malloc_create", 00:19:38.173 "params": { 00:19:38.173 "name": "malloc0", 00:19:38.173 "num_blocks": 8192, 00:19:38.173 "block_size": 4096, 00:19:38.173 "physical_block_size": 4096, 00:19:38.173 "uuid": "fb17be57-bd47-4977-865d-a49e69a864c1", 00:19:38.173 "optimal_io_boundary": 0, 00:19:38.173 "md_size": 0, 00:19:38.173 "dif_type": 0, 00:19:38.173 "dif_is_head_of_md": false, 00:19:38.173 "dif_pi_format": 0 00:19:38.173 } 00:19:38.173 }, 00:19:38.173 { 00:19:38.173 "method": "bdev_wait_for_examine" 00:19:38.173 } 00:19:38.173 ] 00:19:38.173 }, 00:19:38.173 { 00:19:38.173 "subsystem": "nbd", 00:19:38.173 "config": [] 00:19:38.173 }, 00:19:38.173 { 00:19:38.173 "subsystem": "scheduler", 00:19:38.173 "config": [ 00:19:38.173 { 00:19:38.173 "method": "framework_set_scheduler", 00:19:38.173 "params": { 00:19:38.173 "name": "static" 00:19:38.173 } 00:19:38.173 } 00:19:38.173 ] 00:19:38.173 }, 00:19:38.173 { 00:19:38.173 "subsystem": "nvmf", 00:19:38.173 "config": [ 00:19:38.173 { 00:19:38.173 "method": "nvmf_set_config", 00:19:38.173 "params": { 00:19:38.173 "discovery_filter": "match_any", 00:19:38.173 "admin_cmd_passthru": { 00:19:38.173 "identify_ctrlr": false 00:19:38.173 } 00:19:38.173 } 00:19:38.173 }, 00:19:38.173 { 00:19:38.173 "method": "nvmf_set_max_subsystems", 00:19:38.173 "params": { 00:19:38.173 "max_subsystems": 1024 00:19:38.173 } 00:19:38.173 }, 00:19:38.173 { 00:19:38.173 "method": "nvmf_set_crdt", 00:19:38.173 "params": { 00:19:38.173 "crdt1": 0, 00:19:38.173 "crdt2": 0, 00:19:38.173 "crdt3": 0 00:19:38.173 } 00:19:38.173 }, 00:19:38.173 { 00:19:38.173 "method": "nvmf_create_transport", 00:19:38.173 "params": { 00:19:38.173 "trtype": "TCP", 00:19:38.173 "max_queue_depth": 128, 00:19:38.173 "max_io_qpairs_per_ctrlr": 127, 00:19:38.173 "in_capsule_data_size": 4096, 00:19:38.173 "max_io_size": 131072, 00:19:38.173 "io_unit_size": 131072, 00:19:38.173 "max_aq_depth": 128, 00:19:38.173 "num_shared_buffers": 511, 00:19:38.173 "buf_cache_size": 4294967295, 00:19:38.173 "dif_insert_or_strip": false, 00:19:38.173 "zcopy": false, 00:19:38.173 "c2h_success": false, 00:19:38.173 "sock_priority": 0, 00:19:38.173 "abort_timeout_sec": 1, 00:19:38.173 "ack_timeout": 0, 00:19:38.173 "data_wr_pool_size": 0 00:19:38.173 } 00:19:38.173 }, 00:19:38.173 { 00:19:38.173 "method": "nvmf_create_subsystem", 00:19:38.173 "params": { 00:19:38.173 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.173 
"allow_any_host": false, 00:19:38.173 "serial_number": "SPDK00000000000001", 00:19:38.173 "model_number": "SPDK bdev Controller", 00:19:38.173 "max_namespaces": 10, 00:19:38.173 "min_cntlid": 1, 00:19:38.173 "max_cntlid": 65519, 00:19:38.173 "ana_reporting": false 00:19:38.173 } 00:19:38.173 }, 00:19:38.173 { 00:19:38.173 "method": "nvmf_subsystem_add_host", 00:19:38.173 "params": { 00:19:38.173 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.173 "host": "nqn.2016-06.io.spdk:host1", 00:19:38.173 "psk": "/tmp/tmp.pO6GwbkNDs" 00:19:38.173 } 00:19:38.173 }, 00:19:38.173 { 00:19:38.173 "method": "nvmf_subsystem_add_ns", 00:19:38.173 "params": { 00:19:38.173 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.173 "namespace": { 00:19:38.173 "nsid": 1, 00:19:38.173 "bdev_name": "malloc0", 00:19:38.173 "nguid": "FB17BE57BD474977865DA49E69A864C1", 00:19:38.173 "uuid": "fb17be57-bd47-4977-865d-a49e69a864c1", 00:19:38.173 "no_auto_visible": false 00:19:38.173 } 00:19:38.173 } 00:19:38.173 }, 00:19:38.173 { 00:19:38.173 "method": "nvmf_subsystem_add_listener", 00:19:38.173 "params": { 00:19:38.173 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.173 "listen_address": { 00:19:38.173 "trtype": "TCP", 00:19:38.173 "adrfam": "IPv4", 00:19:38.173 "traddr": "10.0.0.2", 00:19:38.173 "trsvcid": "4420" 00:19:38.173 }, 00:19:38.173 "secure_channel": true 00:19:38.173 } 00:19:38.173 } 00:19:38.173 ] 00:19:38.173 } 00:19:38.173 ] 00:19:38.173 }' 00:19:38.173 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.173 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1556836 00:19:38.173 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:38.173 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1556836 00:19:38.173 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1556836 ']' 00:19:38.173 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.173 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:38.173 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:38.173 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:38.173 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.173 [2024-07-24 19:20:24.369031] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:19:38.173 [2024-07-24 19:20:24.369082] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.173 EAL: No free 2048 kB hugepages reported on node 1 00:19:38.433 [2024-07-24 19:20:24.442468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.433 [2024-07-24 19:20:24.506209] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:38.433 [2024-07-24 19:20:24.506253] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:38.433 [2024-07-24 19:20:24.506262] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:38.433 [2024-07-24 19:20:24.506270] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:38.433 [2024-07-24 19:20:24.506277] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:38.433 [2024-07-24 19:20:24.506334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:38.692 [2024-07-24 19:20:24.708570] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:38.692 [2024-07-24 19:20:24.740719] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:38.692 [2024-07-24 19:20:24.756774] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:38.692 [2024-07-24 19:20:24.756963] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:38.951 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:38.951 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:38.951 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:38.951 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:38.951 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.211 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:39.211 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1557036 00:19:39.211 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1557036 /var/tmp/bdevperf.sock 00:19:39.211 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1557036 ']' 00:19:39.211 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:39.211 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:39.211 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:39.211 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:39.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
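Aside: both the nvmf target above and the bdevperf instance below are handed their whole JSON configuration over an anonymous file descriptor, which is where the -c /dev/fd/62 and -c /dev/fd/63 arguments in this trace come from. A minimal sketch of the same pattern, assuming $BPERF_CONFIG is a variable holding the bdevperf subsystems JSON echoed in this log (the variable name is illustrative):

    # Sketch only: bash process substitution yields a /dev/fd/NN path,
    # so the config never touches disk. $BPERF_CONFIG is an assumed
    # variable holding the JSON shown in this log.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
        -c <(echo "$BPERF_CONFIG") &
    bdevperf_pid=$!
    waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock   # helper from autotest_common.sh

The -z flag keeps bdevperf idle until it is told to run over its private RPC socket, which is exactly why the trace below first waits on /var/tmp/bdevperf.sock.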
00:19:39.211 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:39.211 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:19:39.211 "subsystems": [ 00:19:39.211 { 00:19:39.211 "subsystem": "keyring", 00:19:39.211 "config": [] 00:19:39.211 }, 00:19:39.211 { 00:19:39.211 "subsystem": "iobuf", 00:19:39.211 "config": [ 00:19:39.211 { 00:19:39.211 "method": "iobuf_set_options", 00:19:39.211 "params": { 00:19:39.211 "small_pool_count": 8192, 00:19:39.211 "large_pool_count": 1024, 00:19:39.211 "small_bufsize": 8192, 00:19:39.211 "large_bufsize": 135168 00:19:39.211 } 00:19:39.211 } 00:19:39.211 ] 00:19:39.211 }, 00:19:39.211 { 00:19:39.211 "subsystem": "sock", 00:19:39.211 "config": [ 00:19:39.211 { 00:19:39.211 "method": "sock_set_default_impl", 00:19:39.211 "params": { 00:19:39.211 "impl_name": "posix" 00:19:39.211 } 00:19:39.211 }, 00:19:39.211 { 00:19:39.211 "method": "sock_impl_set_options", 00:19:39.211 "params": { 00:19:39.211 "impl_name": "ssl", 00:19:39.211 "recv_buf_size": 4096, 00:19:39.211 "send_buf_size": 4096, 00:19:39.211 "enable_recv_pipe": true, 00:19:39.211 "enable_quickack": false, 00:19:39.211 "enable_placement_id": 0, 00:19:39.211 "enable_zerocopy_send_server": true, 00:19:39.211 "enable_zerocopy_send_client": false, 00:19:39.211 "zerocopy_threshold": 0, 00:19:39.211 "tls_version": 0, 00:19:39.211 "enable_ktls": false 00:19:39.211 } 00:19:39.211 }, 00:19:39.211 { 00:19:39.211 "method": "sock_impl_set_options", 00:19:39.211 "params": { 00:19:39.211 "impl_name": "posix", 00:19:39.211 "recv_buf_size": 2097152, 00:19:39.211 "send_buf_size": 2097152, 00:19:39.211 "enable_recv_pipe": true, 00:19:39.211 "enable_quickack": false, 00:19:39.211 "enable_placement_id": 0, 00:19:39.211 "enable_zerocopy_send_server": true, 00:19:39.211 "enable_zerocopy_send_client": false, 00:19:39.211 "zerocopy_threshold": 0, 00:19:39.211 "tls_version": 0, 00:19:39.211 "enable_ktls": false 00:19:39.211 } 00:19:39.211 } 00:19:39.211 ] 00:19:39.211 }, 00:19:39.211 { 00:19:39.211 "subsystem": "vmd", 00:19:39.211 "config": [] 00:19:39.211 }, 00:19:39.211 { 00:19:39.211 "subsystem": "accel", 00:19:39.211 "config": [ 00:19:39.211 { 00:19:39.211 "method": "accel_set_options", 00:19:39.211 "params": { 00:19:39.211 "small_cache_size": 128, 00:19:39.211 "large_cache_size": 16, 00:19:39.211 "task_count": 2048, 00:19:39.211 "sequence_count": 2048, 00:19:39.211 "buf_count": 2048 00:19:39.211 } 00:19:39.211 } 00:19:39.211 ] 00:19:39.211 }, 00:19:39.211 { 00:19:39.211 "subsystem": "bdev", 00:19:39.211 "config": [ 00:19:39.211 { 00:19:39.211 "method": "bdev_set_options", 00:19:39.211 "params": { 00:19:39.211 "bdev_io_pool_size": 65535, 00:19:39.211 "bdev_io_cache_size": 256, 00:19:39.211 "bdev_auto_examine": true, 00:19:39.211 "iobuf_small_cache_size": 128, 00:19:39.211 "iobuf_large_cache_size": 16 00:19:39.211 } 00:19:39.211 }, 00:19:39.211 { 00:19:39.211 "method": "bdev_raid_set_options", 00:19:39.211 "params": { 00:19:39.211 "process_window_size_kb": 1024, 00:19:39.211 "process_max_bandwidth_mb_sec": 0 00:19:39.211 } 00:19:39.211 }, 00:19:39.211 { 00:19:39.211 "method": "bdev_iscsi_set_options", 00:19:39.211 "params": { 00:19:39.211 "timeout_sec": 30 00:19:39.211 } 00:19:39.211 }, 00:19:39.211 { 00:19:39.211 "method": "bdev_nvme_set_options", 00:19:39.211 "params": { 00:19:39.211 "action_on_timeout": "none", 00:19:39.211 "timeout_us": 0, 00:19:39.211 "timeout_admin_us": 0, 00:19:39.211 "keep_alive_timeout_ms": 10000, 00:19:39.211 
"arbitration_burst": 0, 00:19:39.211 "low_priority_weight": 0, 00:19:39.211 "medium_priority_weight": 0, 00:19:39.211 "high_priority_weight": 0, 00:19:39.211 "nvme_adminq_poll_period_us": 10000, 00:19:39.211 "nvme_ioq_poll_period_us": 0, 00:19:39.211 "io_queue_requests": 512, 00:19:39.211 "delay_cmd_submit": true, 00:19:39.211 "transport_retry_count": 4, 00:19:39.211 "bdev_retry_count": 3, 00:19:39.211 "transport_ack_timeout": 0, 00:19:39.211 "ctrlr_loss_timeout_sec": 0, 00:19:39.211 "reconnect_delay_sec": 0, 00:19:39.211 "fast_io_fail_timeout_sec": 0, 00:19:39.211 "disable_auto_failback": false, 00:19:39.211 "generate_uuids": false, 00:19:39.211 "transport_tos": 0, 00:19:39.211 "nvme_error_stat": false, 00:19:39.211 "rdma_srq_size": 0, 00:19:39.211 "io_path_stat": false, 00:19:39.211 "allow_accel_sequence": false, 00:19:39.212 "rdma_max_cq_size": 0, 00:19:39.212 "rdma_cm_event_timeout_ms": 0, 00:19:39.212 "dhchap_digests": [ 00:19:39.212 "sha256", 00:19:39.212 "sha384", 00:19:39.212 "sha512" 00:19:39.212 ], 00:19:39.212 "dhchap_dhgroups": [ 00:19:39.212 "null", 00:19:39.212 "ffdhe2048", 00:19:39.212 "ffdhe3072", 00:19:39.212 "ffdhe4096", 00:19:39.212 "ffdhe6144", 00:19:39.212 "ffdhe8192" 00:19:39.212 ] 00:19:39.212 } 00:19:39.212 }, 00:19:39.212 { 00:19:39.212 "method": "bdev_nvme_attach_controller", 00:19:39.212 "params": { 00:19:39.212 "name": "TLSTEST", 00:19:39.212 "trtype": "TCP", 00:19:39.212 "adrfam": "IPv4", 00:19:39.212 "traddr": "10.0.0.2", 00:19:39.212 "trsvcid": "4420", 00:19:39.212 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.212 "prchk_reftag": false, 00:19:39.212 "prchk_guard": false, 00:19:39.212 "ctrlr_loss_timeout_sec": 0, 00:19:39.212 "reconnect_delay_sec": 0, 00:19:39.212 "fast_io_fail_timeout_sec": 0, 00:19:39.212 "psk": "/tmp/tmp.pO6GwbkNDs", 00:19:39.212 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:39.212 "hdgst": false, 00:19:39.212 "ddgst": false 00:19:39.212 } 00:19:39.212 }, 00:19:39.212 { 00:19:39.212 "method": "bdev_nvme_set_hotplug", 00:19:39.212 "params": { 00:19:39.212 "period_us": 100000, 00:19:39.212 "enable": false 00:19:39.212 } 00:19:39.212 }, 00:19:39.212 { 00:19:39.212 "method": "bdev_wait_for_examine" 00:19:39.212 } 00:19:39.212 ] 00:19:39.212 }, 00:19:39.212 { 00:19:39.212 "subsystem": "nbd", 00:19:39.212 "config": [] 00:19:39.212 } 00:19:39.212 ] 00:19:39.212 }' 00:19:39.212 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.212 [2024-07-24 19:20:25.258762] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:19:39.212 [2024-07-24 19:20:25.258815] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1557036 ]
00:19:39.212 EAL: No free 2048 kB hugepages reported on node 1
00:19:39.212 [2024-07-24 19:20:25.325684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:39.212 [2024-07-24 19:20:25.397138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:19:39.471 [2024-07-24 19:20:25.538834] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:19:39.471 [2024-07-24 19:20:25.538914] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09
00:19:40.039 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:19:40.039 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0
00:19:40.039 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:19:40.039 Running I/O for 10 seconds...
00:19:50.015
00:19:50.015                                                  Latency(us)
00:19:50.015 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:50.015 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:50.015 Verification LBA range: start 0x0 length 0x2000
00:19:50.015 TLSTESTn1                   :      10.02      5473.95      21.38       0.00     0.00   23340.48    5085.59   56623.10
00:19:50.015 ===================================================================================================================
00:19:50.015 Total                       :                 5473.95      21.38       0.00     0.00   23340.48    5085.59   56623.10
00:19:50.015 0
00:19:50.015 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:19:50.015 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 1557036
00:19:50.015 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1557036 ']'
00:19:50.015 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1557036
00:19:50.016 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:19:50.016 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:50.016 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1557036
00:19:50.275 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:19:50.275 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:19:50.275 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1557036'
00:19:50.275 killing process with pid 1557036
00:19:50.275 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1557036
00:19:50.275 Received shutdown signal, test time was about 10.000000 seconds
00:19:50.275
00:19:50.275                                                  Latency(us)
00:19:50.275 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:50.275 ===================================================================================================================
00:19:50.275 Total                       :                    0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:19:50.275 [2024-07-24 19:20:36.260718] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:19:50.275 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1557036
00:19:50.275 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 1556836
00:19:50.275 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1556836 ']'
00:19:50.275 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1556836
00:19:50.275 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:19:50.275 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:50.275 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1556836
00:19:50.275 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:19:50.275 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:19:50.275 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1556836'
00:19:50.275 killing process with pid 1556836
00:19:50.275 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1556836
00:19:50.275 [2024-07-24 19:20:36.494708] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:19:50.275 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1556836
00:19:50.534 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart
00:19:50.534 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:19:50.534 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable
00:19:50.534 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:19:50.534 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1558967
00:19:50.534 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:19:50.534 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1558967
00:19:50.535 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1558967 ']'
00:19:50.535 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:50.535 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100
00:19:50.535 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
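The setup_nvmf_tgt helper traced in the next lines reduces to the following RPC sequence against the fresh target (key path, NQNs and sizes exactly as logged; this is a sketch of what the helper does, not its verbatim source):

    # Sketch of the target-side TLS setup traced below (target/tls.sh@49-@58).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    key=/tmp/tmp.pO6GwbkNDs
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"

The -k on the listener is what turns on the (still experimental) TLS secure channel; the --psk on add_host ties the host NQN to the pre-shared key file.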
00:19:50.535 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:50.535 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.535 [2024-07-24 19:20:36.741695] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:19:50.535 [2024-07-24 19:20:36.741757] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:50.794 EAL: No free 2048 kB hugepages reported on node 1 00:19:50.794 [2024-07-24 19:20:36.814931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.794 [2024-07-24 19:20:36.878347] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:50.794 [2024-07-24 19:20:36.878393] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:50.794 [2024-07-24 19:20:36.878402] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:50.794 [2024-07-24 19:20:36.878411] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:50.794 [2024-07-24 19:20:36.878418] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:50.794 [2024-07-24 19:20:36.878441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.362 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:51.362 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:51.362 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:51.362 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:51.362 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.362 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:51.362 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.pO6GwbkNDs 00:19:51.362 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.pO6GwbkNDs 00:19:51.362 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:51.621 [2024-07-24 19:20:37.737769] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:51.621 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:51.880 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:51.880 [2024-07-24 19:20:38.086664] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:51.880 [2024-07-24 19:20:38.086868] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:51.880 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:52.139 malloc0 00:19:52.139 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:52.398 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pO6GwbkNDs 00:19:52.398 [2024-07-24 19:20:38.600245] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:52.398 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1559257 00:19:52.398 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:52.399 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:52.399 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1559257 /var/tmp/bdevperf.sock 00:19:52.399 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1559257 ']' 00:19:52.399 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:52.399 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:52.399 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:52.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:52.399 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:52.399 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.658 [2024-07-24 19:20:38.667043] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
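On the initiator side the PSK is first registered in bdevperf's keyring and then referenced by name when the controller is attached; a sketch mirroring the two rpc.py calls traced just below:

    # Sketch of the initiator-side attach traced below (target/tls.sh@227-@228).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pO6GwbkNDs
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

This keyring-based --psk key0 form is the replacement for the deprecated direct PSK path seen in the earlier bdevperf config dumps.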
00:19:52.658 [2024-07-24 19:20:38.667093] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1559257 ]
00:19:52.658 EAL: No free 2048 kB hugepages reported on node 1
00:19:52.658 [2024-07-24 19:20:38.735925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:52.658 [2024-07-24 19:20:38.805241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:19:53.226 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:19:53.226 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0
00:19:53.226 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pO6GwbkNDs
00:19:53.485 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
00:19:53.744 [2024-07-24 19:20:39.792599] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:19:53.744 nvme0n1
00:19:53.744 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:53.744 Running I/O for 1 seconds...
00:19:55.175
00:19:55.175                                                  Latency(us)
00:19:55.175 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:55.175 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:55.175 Verification LBA range: start 0x0 length 0x2000
00:19:55.175 nvme0n1                     :       1.02      4793.74      18.73       0.00     0.00   26419.93    6710.89   67947.72
00:19:55.176 ===================================================================================================================
00:19:55.176 Total                       :                 4793.74      18.73       0.00     0.00   26419.93    6710.89   67947.72
00:19:55.176 0
00:19:55.176 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 1559257
00:19:55.176 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1559257 ']'
00:19:55.176 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1559257
00:19:55.176 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:19:55.176 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:55.176 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1559257
00:19:55.176 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:19:55.176 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:19:55.176 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1559257'
00:19:55.176 killing process with pid 1559257
00:19:55.176 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1559257
00:19:55.176 Received shutdown signal, test time was about 1.000000 seconds
00:19:55.176
00:19:55.176                                                  Latency(us)
00:19:55.176 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:55.176 ===================================================================================================================
00:19:55.176 Total                       :                    0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:19:55.176 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1559257
00:19:55.176 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 1558967
00:19:55.176 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1558967 ']'
00:19:55.176 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1558967
00:19:55.176 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:19:55.176 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:55.176 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1558967
00:19:55.176 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:19:55.176 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:19:55.176 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1558967'
00:19:55.176 killing process with pid 1558967
00:19:55.176 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1558967
00:19:55.176 [2024-07-24 19:20:41.308450] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:19:55.176 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1558967
00:19:55.435 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart
00:19:55.435 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:19:55.435 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable
00:19:55.435 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:19:55.435 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1559802
00:19:55.435 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:19:55.435 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1559802
00:19:55.435 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1559802 ']'
00:19:55.435 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:55.435 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100
00:19:55.435 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
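The killprocess helper whose xtrace recurs throughout this run can be read off its @950-@974 trace lines. A hedged reconstruction from the trace follows; it is not the verbatim autotest_common.sh source, and the sudo branch is an assumption, since this log only ever takes the reactor path:

    # Reconstruction of common/autotest_common.sh's killprocess, pieced
    # together from the trace line numbers seen in this log.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                            # @950
        kill -0 "$pid" || return 1                           # @954: still alive?
        local process_name=
        if [ "$(uname)" = Linux ]; then                      # @955
            process_name=$(ps --no-headers -o comm= "$pid")  # @956
        fi
        if [ "$process_name" = sudo ]; then                  # @960
            kill -9 "$pid"                                   # assumed branch, never hit here
        else
            echo "killing process with pid $pid"             # @968
            kill "$pid"                                      # @969
        fi
        wait "$pid"                                          # @974
    }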
00:19:55.435 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:55.435 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.435 [2024-07-24 19:20:41.537811] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:19:55.435 [2024-07-24 19:20:41.537866] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.435 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.435 [2024-07-24 19:20:41.600451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.435 [2024-07-24 19:20:41.667573] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:55.435 [2024-07-24 19:20:41.667614] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:55.435 [2024-07-24 19:20:41.667623] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:55.435 [2024-07-24 19:20:41.667635] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:55.435 [2024-07-24 19:20:41.667642] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:55.435 [2024-07-24 19:20:41.667667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.373 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:56.373 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:56.373 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:56.373 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:56.373 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.373 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:56.373 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:19:56.373 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.373 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.373 [2024-07-24 19:20:42.377020] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:56.373 malloc0 00:19:56.373 [2024-07-24 19:20:42.405533] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:56.373 [2024-07-24 19:20:42.421061] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:56.373 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.373 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=1559918 00:19:56.373 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 1559918 /var/tmp/bdevperf.sock 00:19:56.373 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1559918 ']' 00:19:56.373 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:56.373 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:56.373 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:56.373 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:56.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:56.373 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:56.373 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.373 [2024-07-24 19:20:42.477680] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:19:56.373 [2024-07-24 19:20:42.477730] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1559918 ] 00:19:56.373 EAL: No free 2048 kB hugepages reported on node 1 00:19:56.373 [2024-07-24 19:20:42.547816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.632 [2024-07-24 19:20:42.623296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:57.199 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:57.199 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:57.199 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pO6GwbkNDs 00:19:57.458 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:57.458 [2024-07-24 19:20:43.594708] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:57.458 nvme0n1 00:19:57.458 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:57.715 Running I/O for 1 seconds... 
00:19:58.651
00:19:58.651                                                  Latency(us)
00:19:58.651 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:58.651 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:58.651 Verification LBA range: start 0x0 length 0x2000
00:19:58.651 nvme0n1                     :       1.02      4987.48      19.48       0.00     0.00   25370.86    6422.53   47185.92
00:19:58.651 ===================================================================================================================
00:19:58.651 Total                       :                 4987.48      19.48       0.00     0.00   25370.86    6422.53   47185.92
00:19:58.651 0
00:19:58.651 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config
00:19:58.651 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:58.651 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:19:58.911 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:58.911 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:19:58.911 "subsystems": [ 00:19:58.911 { 00:19:58.911 "subsystem": "keyring", 00:19:58.911 "config": [ 00:19:58.911 { 00:19:58.911 "method": "keyring_file_add_key", 00:19:58.911 "params": { 00:19:58.911 "name": "key0", 00:19:58.911 "path": "/tmp/tmp.pO6GwbkNDs" 00:19:58.911 } 00:19:58.911 } 00:19:58.911 ] 00:19:58.911 }, 00:19:58.911 { 00:19:58.911 "subsystem": "iobuf", 00:19:58.911 "config": [ 00:19:58.911 { 00:19:58.911 "method": "iobuf_set_options", 00:19:58.911 "params": { 00:19:58.911 "small_pool_count": 8192, 00:19:58.911 "large_pool_count": 1024, 00:19:58.911 "small_bufsize": 8192, 00:19:58.911 "large_bufsize": 135168 00:19:58.911 } 00:19:58.911 } 00:19:58.911 ] 00:19:58.911 }, 00:19:58.911 { 00:19:58.911 "subsystem": "sock", 00:19:58.911 "config": [ 00:19:58.911 { 00:19:58.911 "method": "sock_set_default_impl", 00:19:58.911 "params": { 00:19:58.911 "impl_name": "posix" 00:19:58.911 } 00:19:58.911 }, 00:19:58.911 { 00:19:58.911 "method": "sock_impl_set_options", 00:19:58.911 "params": { 00:19:58.911 "impl_name": "ssl", 00:19:58.911 "recv_buf_size": 4096, 00:19:58.911 "send_buf_size": 4096, 00:19:58.911 "enable_recv_pipe": true, 00:19:58.911 "enable_quickack": false, 00:19:58.911 "enable_placement_id": 0, 00:19:58.911 "enable_zerocopy_send_server": true, 00:19:58.911 "enable_zerocopy_send_client": false, 00:19:58.911 "zerocopy_threshold": 0, 00:19:58.911 "tls_version": 0, 00:19:58.911 "enable_ktls": false 00:19:58.911 } 00:19:58.911 }, 00:19:58.911 { 00:19:58.911 "method": "sock_impl_set_options", 00:19:58.911 "params": { 00:19:58.911 "impl_name": "posix", 00:19:58.911 "recv_buf_size": 2097152, 00:19:58.911 "send_buf_size": 2097152, 00:19:58.911 "enable_recv_pipe": true, 00:19:58.911 "enable_quickack": false, 00:19:58.911 "enable_placement_id": 0, 00:19:58.911 "enable_zerocopy_send_server": true, 00:19:58.911 "enable_zerocopy_send_client": false, 00:19:58.911 "zerocopy_threshold": 0, 00:19:58.911 "tls_version": 0, 00:19:58.911 "enable_ktls": false 00:19:58.911 } 00:19:58.911 } 00:19:58.911 ] 00:19:58.911 }, 00:19:58.911 { 00:19:58.911 "subsystem": "vmd", 00:19:58.911 "config": [] 00:19:58.911 }, 00:19:58.911 { 00:19:58.911 "subsystem": "accel", 00:19:58.911 "config": [ 00:19:58.911 { 00:19:58.911 "method": "accel_set_options", 00:19:58.911 "params": { 00:19:58.911 "small_cache_size": 128, 00:19:58.911 "large_cache_size": 16, 00:19:58.911 "task_count": 2048, 00:19:58.911 "sequence_count": 2048, 00:19:58.911 "buf_count":
2048 00:19:58.911 } 00:19:58.911 } 00:19:58.911 ] 00:19:58.911 }, 00:19:58.911 { 00:19:58.911 "subsystem": "bdev", 00:19:58.911 "config": [ 00:19:58.911 { 00:19:58.911 "method": "bdev_set_options", 00:19:58.911 "params": { 00:19:58.911 "bdev_io_pool_size": 65535, 00:19:58.911 "bdev_io_cache_size": 256, 00:19:58.911 "bdev_auto_examine": true, 00:19:58.911 "iobuf_small_cache_size": 128, 00:19:58.911 "iobuf_large_cache_size": 16 00:19:58.911 } 00:19:58.911 }, 00:19:58.911 { 00:19:58.911 "method": "bdev_raid_set_options", 00:19:58.911 "params": { 00:19:58.911 "process_window_size_kb": 1024, 00:19:58.911 "process_max_bandwidth_mb_sec": 0 00:19:58.911 } 00:19:58.911 }, 00:19:58.911 { 00:19:58.911 "method": "bdev_iscsi_set_options", 00:19:58.911 "params": { 00:19:58.911 "timeout_sec": 30 00:19:58.911 } 00:19:58.911 }, 00:19:58.911 { 00:19:58.911 "method": "bdev_nvme_set_options", 00:19:58.911 "params": { 00:19:58.911 "action_on_timeout": "none", 00:19:58.911 "timeout_us": 0, 00:19:58.911 "timeout_admin_us": 0, 00:19:58.911 "keep_alive_timeout_ms": 10000, 00:19:58.911 "arbitration_burst": 0, 00:19:58.911 "low_priority_weight": 0, 00:19:58.911 "medium_priority_weight": 0, 00:19:58.911 "high_priority_weight": 0, 00:19:58.911 "nvme_adminq_poll_period_us": 10000, 00:19:58.911 "nvme_ioq_poll_period_us": 0, 00:19:58.911 "io_queue_requests": 0, 00:19:58.911 "delay_cmd_submit": true, 00:19:58.911 "transport_retry_count": 4, 00:19:58.911 "bdev_retry_count": 3, 00:19:58.911 "transport_ack_timeout": 0, 00:19:58.911 "ctrlr_loss_timeout_sec": 0, 00:19:58.911 "reconnect_delay_sec": 0, 00:19:58.911 "fast_io_fail_timeout_sec": 0, 00:19:58.911 "disable_auto_failback": false, 00:19:58.911 "generate_uuids": false, 00:19:58.911 "transport_tos": 0, 00:19:58.911 "nvme_error_stat": false, 00:19:58.911 "rdma_srq_size": 0, 00:19:58.911 "io_path_stat": false, 00:19:58.911 "allow_accel_sequence": false, 00:19:58.911 "rdma_max_cq_size": 0, 00:19:58.911 "rdma_cm_event_timeout_ms": 0, 00:19:58.911 "dhchap_digests": [ 00:19:58.911 "sha256", 00:19:58.911 "sha384", 00:19:58.911 "sha512" 00:19:58.911 ], 00:19:58.911 "dhchap_dhgroups": [ 00:19:58.911 "null", 00:19:58.911 "ffdhe2048", 00:19:58.911 "ffdhe3072", 00:19:58.911 "ffdhe4096", 00:19:58.911 "ffdhe6144", 00:19:58.911 "ffdhe8192" 00:19:58.911 ] 00:19:58.911 } 00:19:58.911 }, 00:19:58.911 { 00:19:58.911 "method": "bdev_nvme_set_hotplug", 00:19:58.911 "params": { 00:19:58.911 "period_us": 100000, 00:19:58.911 "enable": false 00:19:58.911 } 00:19:58.911 }, 00:19:58.911 { 00:19:58.911 "method": "bdev_malloc_create", 00:19:58.911 "params": { 00:19:58.911 "name": "malloc0", 00:19:58.911 "num_blocks": 8192, 00:19:58.911 "block_size": 4096, 00:19:58.911 "physical_block_size": 4096, 00:19:58.911 "uuid": "6d0c144b-b1dc-4580-9bf8-de0e1b2aad10", 00:19:58.911 "optimal_io_boundary": 0, 00:19:58.911 "md_size": 0, 00:19:58.911 "dif_type": 0, 00:19:58.911 "dif_is_head_of_md": false, 00:19:58.911 "dif_pi_format": 0 00:19:58.911 } 00:19:58.911 }, 00:19:58.911 { 00:19:58.912 "method": "bdev_wait_for_examine" 00:19:58.912 } 00:19:58.912 ] 00:19:58.912 }, 00:19:58.912 { 00:19:58.912 "subsystem": "nbd", 00:19:58.912 "config": [] 00:19:58.912 }, 00:19:58.912 { 00:19:58.912 "subsystem": "scheduler", 00:19:58.912 "config": [ 00:19:58.912 { 00:19:58.912 "method": "framework_set_scheduler", 00:19:58.912 "params": { 00:19:58.912 "name": "static" 00:19:58.912 } 00:19:58.912 } 00:19:58.912 ] 00:19:58.912 }, 00:19:58.912 { 00:19:58.912 "subsystem": "nvmf", 00:19:58.912 "config": [ 00:19:58.912 { 00:19:58.912 
"method": "nvmf_set_config", 00:19:58.912 "params": { 00:19:58.912 "discovery_filter": "match_any", 00:19:58.912 "admin_cmd_passthru": { 00:19:58.912 "identify_ctrlr": false 00:19:58.912 } 00:19:58.912 } 00:19:58.912 }, 00:19:58.912 { 00:19:58.912 "method": "nvmf_set_max_subsystems", 00:19:58.912 "params": { 00:19:58.912 "max_subsystems": 1024 00:19:58.912 } 00:19:58.912 }, 00:19:58.912 { 00:19:58.912 "method": "nvmf_set_crdt", 00:19:58.912 "params": { 00:19:58.912 "crdt1": 0, 00:19:58.912 "crdt2": 0, 00:19:58.912 "crdt3": 0 00:19:58.912 } 00:19:58.912 }, 00:19:58.912 { 00:19:58.912 "method": "nvmf_create_transport", 00:19:58.912 "params": { 00:19:58.912 "trtype": "TCP", 00:19:58.912 "max_queue_depth": 128, 00:19:58.912 "max_io_qpairs_per_ctrlr": 127, 00:19:58.912 "in_capsule_data_size": 4096, 00:19:58.912 "max_io_size": 131072, 00:19:58.912 "io_unit_size": 131072, 00:19:58.912 "max_aq_depth": 128, 00:19:58.912 "num_shared_buffers": 511, 00:19:58.912 "buf_cache_size": 4294967295, 00:19:58.912 "dif_insert_or_strip": false, 00:19:58.912 "zcopy": false, 00:19:58.912 "c2h_success": false, 00:19:58.912 "sock_priority": 0, 00:19:58.912 "abort_timeout_sec": 1, 00:19:58.912 "ack_timeout": 0, 00:19:58.912 "data_wr_pool_size": 0 00:19:58.912 } 00:19:58.912 }, 00:19:58.912 { 00:19:58.912 "method": "nvmf_create_subsystem", 00:19:58.912 "params": { 00:19:58.912 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.912 "allow_any_host": false, 00:19:58.912 "serial_number": "00000000000000000000", 00:19:58.912 "model_number": "SPDK bdev Controller", 00:19:58.912 "max_namespaces": 32, 00:19:58.912 "min_cntlid": 1, 00:19:58.912 "max_cntlid": 65519, 00:19:58.912 "ana_reporting": false 00:19:58.912 } 00:19:58.912 }, 00:19:58.912 { 00:19:58.912 "method": "nvmf_subsystem_add_host", 00:19:58.912 "params": { 00:19:58.912 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.912 "host": "nqn.2016-06.io.spdk:host1", 00:19:58.912 "psk": "key0" 00:19:58.912 } 00:19:58.912 }, 00:19:58.912 { 00:19:58.912 "method": "nvmf_subsystem_add_ns", 00:19:58.912 "params": { 00:19:58.912 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.912 "namespace": { 00:19:58.912 "nsid": 1, 00:19:58.912 "bdev_name": "malloc0", 00:19:58.912 "nguid": "6D0C144BB1DC45809BF8DE0E1B2AAD10", 00:19:58.912 "uuid": "6d0c144b-b1dc-4580-9bf8-de0e1b2aad10", 00:19:58.912 "no_auto_visible": false 00:19:58.912 } 00:19:58.912 } 00:19:58.912 }, 00:19:58.912 { 00:19:58.912 "method": "nvmf_subsystem_add_listener", 00:19:58.912 "params": { 00:19:58.912 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.912 "listen_address": { 00:19:58.912 "trtype": "TCP", 00:19:58.912 "adrfam": "IPv4", 00:19:58.912 "traddr": "10.0.0.2", 00:19:58.912 "trsvcid": "4420" 00:19:58.912 }, 00:19:58.912 "secure_channel": false, 00:19:58.912 "sock_impl": "ssl" 00:19:58.912 } 00:19:58.912 } 00:19:58.912 ] 00:19:58.912 } 00:19:58.912 ] 00:19:58.912 }' 00:19:58.912 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:59.171 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:19:59.171 "subsystems": [ 00:19:59.171 { 00:19:59.171 "subsystem": "keyring", 00:19:59.171 "config": [ 00:19:59.171 { 00:19:59.171 "method": "keyring_file_add_key", 00:19:59.171 "params": { 00:19:59.171 "name": "key0", 00:19:59.171 "path": "/tmp/tmp.pO6GwbkNDs" 00:19:59.171 } 00:19:59.171 } 00:19:59.171 ] 00:19:59.171 }, 00:19:59.171 { 00:19:59.171 "subsystem": "iobuf", 00:19:59.171 
"config": [ 00:19:59.171 { 00:19:59.171 "method": "iobuf_set_options", 00:19:59.171 "params": { 00:19:59.171 "small_pool_count": 8192, 00:19:59.171 "large_pool_count": 1024, 00:19:59.171 "small_bufsize": 8192, 00:19:59.171 "large_bufsize": 135168 00:19:59.171 } 00:19:59.171 } 00:19:59.171 ] 00:19:59.171 }, 00:19:59.171 { 00:19:59.171 "subsystem": "sock", 00:19:59.171 "config": [ 00:19:59.171 { 00:19:59.171 "method": "sock_set_default_impl", 00:19:59.171 "params": { 00:19:59.171 "impl_name": "posix" 00:19:59.171 } 00:19:59.171 }, 00:19:59.171 { 00:19:59.171 "method": "sock_impl_set_options", 00:19:59.171 "params": { 00:19:59.171 "impl_name": "ssl", 00:19:59.171 "recv_buf_size": 4096, 00:19:59.171 "send_buf_size": 4096, 00:19:59.171 "enable_recv_pipe": true, 00:19:59.171 "enable_quickack": false, 00:19:59.171 "enable_placement_id": 0, 00:19:59.171 "enable_zerocopy_send_server": true, 00:19:59.171 "enable_zerocopy_send_client": false, 00:19:59.171 "zerocopy_threshold": 0, 00:19:59.171 "tls_version": 0, 00:19:59.171 "enable_ktls": false 00:19:59.171 } 00:19:59.171 }, 00:19:59.171 { 00:19:59.171 "method": "sock_impl_set_options", 00:19:59.171 "params": { 00:19:59.171 "impl_name": "posix", 00:19:59.171 "recv_buf_size": 2097152, 00:19:59.171 "send_buf_size": 2097152, 00:19:59.171 "enable_recv_pipe": true, 00:19:59.171 "enable_quickack": false, 00:19:59.171 "enable_placement_id": 0, 00:19:59.171 "enable_zerocopy_send_server": true, 00:19:59.171 "enable_zerocopy_send_client": false, 00:19:59.171 "zerocopy_threshold": 0, 00:19:59.171 "tls_version": 0, 00:19:59.171 "enable_ktls": false 00:19:59.171 } 00:19:59.171 } 00:19:59.171 ] 00:19:59.171 }, 00:19:59.171 { 00:19:59.171 "subsystem": "vmd", 00:19:59.171 "config": [] 00:19:59.171 }, 00:19:59.171 { 00:19:59.171 "subsystem": "accel", 00:19:59.171 "config": [ 00:19:59.171 { 00:19:59.171 "method": "accel_set_options", 00:19:59.171 "params": { 00:19:59.171 "small_cache_size": 128, 00:19:59.171 "large_cache_size": 16, 00:19:59.171 "task_count": 2048, 00:19:59.171 "sequence_count": 2048, 00:19:59.171 "buf_count": 2048 00:19:59.171 } 00:19:59.171 } 00:19:59.171 ] 00:19:59.171 }, 00:19:59.171 { 00:19:59.171 "subsystem": "bdev", 00:19:59.171 "config": [ 00:19:59.171 { 00:19:59.171 "method": "bdev_set_options", 00:19:59.171 "params": { 00:19:59.171 "bdev_io_pool_size": 65535, 00:19:59.171 "bdev_io_cache_size": 256, 00:19:59.172 "bdev_auto_examine": true, 00:19:59.172 "iobuf_small_cache_size": 128, 00:19:59.172 "iobuf_large_cache_size": 16 00:19:59.172 } 00:19:59.172 }, 00:19:59.172 { 00:19:59.172 "method": "bdev_raid_set_options", 00:19:59.172 "params": { 00:19:59.172 "process_window_size_kb": 1024, 00:19:59.172 "process_max_bandwidth_mb_sec": 0 00:19:59.172 } 00:19:59.172 }, 00:19:59.172 { 00:19:59.172 "method": "bdev_iscsi_set_options", 00:19:59.172 "params": { 00:19:59.172 "timeout_sec": 30 00:19:59.172 } 00:19:59.172 }, 00:19:59.172 { 00:19:59.172 "method": "bdev_nvme_set_options", 00:19:59.172 "params": { 00:19:59.172 "action_on_timeout": "none", 00:19:59.172 "timeout_us": 0, 00:19:59.172 "timeout_admin_us": 0, 00:19:59.172 "keep_alive_timeout_ms": 10000, 00:19:59.172 "arbitration_burst": 0, 00:19:59.172 "low_priority_weight": 0, 00:19:59.172 "medium_priority_weight": 0, 00:19:59.172 "high_priority_weight": 0, 00:19:59.172 "nvme_adminq_poll_period_us": 10000, 00:19:59.172 "nvme_ioq_poll_period_us": 0, 00:19:59.172 "io_queue_requests": 512, 00:19:59.172 "delay_cmd_submit": true, 00:19:59.172 "transport_retry_count": 4, 00:19:59.172 "bdev_retry_count": 3, 
00:19:59.172 "transport_ack_timeout": 0, 00:19:59.172 "ctrlr_loss_timeout_sec": 0, 00:19:59.172 "reconnect_delay_sec": 0, 00:19:59.172 "fast_io_fail_timeout_sec": 0, 00:19:59.172 "disable_auto_failback": false, 00:19:59.172 "generate_uuids": false, 00:19:59.172 "transport_tos": 0, 00:19:59.172 "nvme_error_stat": false, 00:19:59.172 "rdma_srq_size": 0, 00:19:59.172 "io_path_stat": false, 00:19:59.172 "allow_accel_sequence": false, 00:19:59.172 "rdma_max_cq_size": 0, 00:19:59.172 "rdma_cm_event_timeout_ms": 0, 00:19:59.172 "dhchap_digests": [ 00:19:59.172 "sha256", 00:19:59.172 "sha384", 00:19:59.172 "sha512" 00:19:59.172 ], 00:19:59.172 "dhchap_dhgroups": [ 00:19:59.172 "null", 00:19:59.172 "ffdhe2048", 00:19:59.172 "ffdhe3072", 00:19:59.172 "ffdhe4096", 00:19:59.172 "ffdhe6144", 00:19:59.172 "ffdhe8192" 00:19:59.172 ] 00:19:59.172 } 00:19:59.172 }, 00:19:59.172 { 00:19:59.172 "method": "bdev_nvme_attach_controller", 00:19:59.172 "params": { 00:19:59.172 "name": "nvme0", 00:19:59.172 "trtype": "TCP", 00:19:59.172 "adrfam": "IPv4", 00:19:59.172 "traddr": "10.0.0.2", 00:19:59.172 "trsvcid": "4420", 00:19:59.172 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.172 "prchk_reftag": false, 00:19:59.172 "prchk_guard": false, 00:19:59.172 "ctrlr_loss_timeout_sec": 0, 00:19:59.172 "reconnect_delay_sec": 0, 00:19:59.172 "fast_io_fail_timeout_sec": 0, 00:19:59.172 "psk": "key0", 00:19:59.172 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:59.172 "hdgst": false, 00:19:59.172 "ddgst": false 00:19:59.172 } 00:19:59.172 }, 00:19:59.172 { 00:19:59.172 "method": "bdev_nvme_set_hotplug", 00:19:59.172 "params": { 00:19:59.172 "period_us": 100000, 00:19:59.172 "enable": false 00:19:59.172 } 00:19:59.172 }, 00:19:59.172 { 00:19:59.172 "method": "bdev_enable_histogram", 00:19:59.172 "params": { 00:19:59.172 "name": "nvme0n1", 00:19:59.172 "enable": true 00:19:59.172 } 00:19:59.172 }, 00:19:59.172 { 00:19:59.172 "method": "bdev_wait_for_examine" 00:19:59.172 } 00:19:59.172 ] 00:19:59.172 }, 00:19:59.172 { 00:19:59.172 "subsystem": "nbd", 00:19:59.172 "config": [] 00:19:59.172 } 00:19:59.172 ] 00:19:59.172 }' 00:19:59.172 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 1559918 00:19:59.172 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1559918 ']' 00:19:59.172 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1559918 00:19:59.172 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:59.172 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:59.172 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1559918 00:19:59.172 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:59.172 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:59.172 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1559918' 00:19:59.172 killing process with pid 1559918 00:19:59.172 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1559918 00:19:59.172 Received shutdown signal, test time was about 1.000000 seconds 00:19:59.172 00:19:59.172 Latency(us) 00:19:59.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.172 
=================================================================================================================== 00:19:59.172 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:59.172 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1559918 00:19:59.432 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 1559802 00:19:59.432 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1559802 ']' 00:19:59.432 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1559802 00:19:59.432 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:59.432 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:59.432 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1559802 00:19:59.432 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:59.432 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:59.432 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1559802' 00:19:59.432 killing process with pid 1559802 00:19:59.432 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1559802 00:19:59.432 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1559802 00:19:59.432 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:19:59.432 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:59.432 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:59.432 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.432 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:19:59.432 "subsystems": [ 00:19:59.432 { 00:19:59.432 "subsystem": "keyring", 00:19:59.432 "config": [ 00:19:59.432 { 00:19:59.432 "method": "keyring_file_add_key", 00:19:59.432 "params": { 00:19:59.432 "name": "key0", 00:19:59.432 "path": "/tmp/tmp.pO6GwbkNDs" 00:19:59.432 } 00:19:59.432 } 00:19:59.432 ] 00:19:59.432 }, 00:19:59.432 { 00:19:59.432 "subsystem": "iobuf", 00:19:59.432 "config": [ 00:19:59.432 { 00:19:59.432 "method": "iobuf_set_options", 00:19:59.432 "params": { 00:19:59.432 "small_pool_count": 8192, 00:19:59.432 "large_pool_count": 1024, 00:19:59.432 "small_bufsize": 8192, 00:19:59.432 "large_bufsize": 135168 00:19:59.432 } 00:19:59.432 } 00:19:59.432 ] 00:19:59.432 }, 00:19:59.432 { 00:19:59.432 "subsystem": "sock", 00:19:59.432 "config": [ 00:19:59.432 { 00:19:59.432 "method": "sock_set_default_impl", 00:19:59.432 "params": { 00:19:59.432 "impl_name": "posix" 00:19:59.432 } 00:19:59.432 }, 00:19:59.432 { 00:19:59.432 "method": "sock_impl_set_options", 00:19:59.432 "params": { 00:19:59.432 "impl_name": "ssl", 00:19:59.432 "recv_buf_size": 4096, 00:19:59.432 "send_buf_size": 4096, 00:19:59.432 "enable_recv_pipe": true, 00:19:59.432 "enable_quickack": false, 00:19:59.432 "enable_placement_id": 0, 00:19:59.432 "enable_zerocopy_send_server": true, 00:19:59.432 "enable_zerocopy_send_client": false, 00:19:59.432 "zerocopy_threshold": 0, 00:19:59.432 "tls_version": 0, 00:19:59.432 
"enable_ktls": false 00:19:59.432 } 00:19:59.432 }, 00:19:59.432 { 00:19:59.432 "method": "sock_impl_set_options", 00:19:59.432 "params": { 00:19:59.432 "impl_name": "posix", 00:19:59.432 "recv_buf_size": 2097152, 00:19:59.432 "send_buf_size": 2097152, 00:19:59.432 "enable_recv_pipe": true, 00:19:59.432 "enable_quickack": false, 00:19:59.432 "enable_placement_id": 0, 00:19:59.432 "enable_zerocopy_send_server": true, 00:19:59.432 "enable_zerocopy_send_client": false, 00:19:59.432 "zerocopy_threshold": 0, 00:19:59.432 "tls_version": 0, 00:19:59.432 "enable_ktls": false 00:19:59.432 } 00:19:59.432 } 00:19:59.432 ] 00:19:59.432 }, 00:19:59.432 { 00:19:59.432 "subsystem": "vmd", 00:19:59.432 "config": [] 00:19:59.432 }, 00:19:59.432 { 00:19:59.432 "subsystem": "accel", 00:19:59.432 "config": [ 00:19:59.432 { 00:19:59.432 "method": "accel_set_options", 00:19:59.432 "params": { 00:19:59.432 "small_cache_size": 128, 00:19:59.432 "large_cache_size": 16, 00:19:59.432 "task_count": 2048, 00:19:59.432 "sequence_count": 2048, 00:19:59.432 "buf_count": 2048 00:19:59.432 } 00:19:59.432 } 00:19:59.432 ] 00:19:59.432 }, 00:19:59.432 { 00:19:59.432 "subsystem": "bdev", 00:19:59.432 "config": [ 00:19:59.432 { 00:19:59.432 "method": "bdev_set_options", 00:19:59.432 "params": { 00:19:59.432 "bdev_io_pool_size": 65535, 00:19:59.432 "bdev_io_cache_size": 256, 00:19:59.432 "bdev_auto_examine": true, 00:19:59.432 "iobuf_small_cache_size": 128, 00:19:59.432 "iobuf_large_cache_size": 16 00:19:59.432 } 00:19:59.432 }, 00:19:59.432 { 00:19:59.432 "method": "bdev_raid_set_options", 00:19:59.432 "params": { 00:19:59.432 "process_window_size_kb": 1024, 00:19:59.432 "process_max_bandwidth_mb_sec": 0 00:19:59.432 } 00:19:59.432 }, 00:19:59.432 { 00:19:59.432 "method": "bdev_iscsi_set_options", 00:19:59.432 "params": { 00:19:59.432 "timeout_sec": 30 00:19:59.432 } 00:19:59.432 }, 00:19:59.432 { 00:19:59.432 "method": "bdev_nvme_set_options", 00:19:59.432 "params": { 00:19:59.432 "action_on_timeout": "none", 00:19:59.432 "timeout_us": 0, 00:19:59.432 "timeout_admin_us": 0, 00:19:59.432 "keep_alive_timeout_ms": 10000, 00:19:59.432 "arbitration_burst": 0, 00:19:59.432 "low_priority_weight": 0, 00:19:59.432 "medium_priority_weight": 0, 00:19:59.432 "high_priority_weight": 0, 00:19:59.432 "nvme_adminq_poll_period_us": 10000, 00:19:59.432 "nvme_ioq_poll_period_us": 0, 00:19:59.432 "io_queue_requests": 0, 00:19:59.432 "delay_cmd_submit": true, 00:19:59.432 "transport_retry_count": 4, 00:19:59.432 "bdev_retry_count": 3, 00:19:59.432 "transport_ack_timeout": 0, 00:19:59.432 "ctrlr_loss_timeout_sec": 0, 00:19:59.432 "reconnect_delay_sec": 0, 00:19:59.432 "fast_io_fail_timeout_sec": 0, 00:19:59.432 "disable_auto_failback": false, 00:19:59.432 "generate_uuids": false, 00:19:59.432 "transport_tos": 0, 00:19:59.432 "nvme_error_stat": false, 00:19:59.432 "rdma_srq_size": 0, 00:19:59.432 "io_path_stat": false, 00:19:59.432 "allow_accel_sequence": false, 00:19:59.432 "rdma_max_cq_size": 0, 00:19:59.432 "rdma_cm_event_timeout_ms": 0, 00:19:59.432 "dhchap_digests": [ 00:19:59.432 "sha256", 00:19:59.432 "sha384", 00:19:59.432 "sha512" 00:19:59.432 ], 00:19:59.432 "dhchap_dhgroups": [ 00:19:59.432 "null", 00:19:59.432 "ffdhe2048", 00:19:59.432 "ffdhe3072", 00:19:59.432 "ffdhe4096", 00:19:59.432 "ffdhe6144", 00:19:59.432 "ffdhe8192" 00:19:59.432 ] 00:19:59.432 } 00:19:59.432 }, 00:19:59.432 { 00:19:59.432 "method": "bdev_nvme_set_hotplug", 00:19:59.432 "params": { 00:19:59.432 "period_us": 100000, 00:19:59.432 "enable": false 00:19:59.432 } 
00:19:59.432 }, 00:19:59.432 { 00:19:59.432 "method": "bdev_malloc_create", 00:19:59.432 "params": { 00:19:59.432 "name": "malloc0", 00:19:59.432 "num_blocks": 8192, 00:19:59.432 "block_size": 4096, 00:19:59.432 "physical_block_size": 4096, 00:19:59.432 "uuid": "6d0c144b-b1dc-4580-9bf8-de0e1b2aad10", 00:19:59.432 "optimal_io_boundary": 0, 00:19:59.432 "md_size": 0, 00:19:59.432 "dif_type": 0, 00:19:59.432 "dif_is_head_of_md": false, 00:19:59.432 "dif_pi_format": 0 00:19:59.432 } 00:19:59.432 }, 00:19:59.432 { 00:19:59.432 "method": "bdev_wait_for_examine" 00:19:59.432 } 00:19:59.432 ] 00:19:59.432 }, 00:19:59.432 { 00:19:59.433 "subsystem": "nbd", 00:19:59.433 "config": [] 00:19:59.433 }, 00:19:59.433 { 00:19:59.433 "subsystem": "scheduler", 00:19:59.433 "config": [ 00:19:59.433 { 00:19:59.433 "method": "framework_set_scheduler", 00:19:59.433 "params": { 00:19:59.433 "name": "static" 00:19:59.433 } 00:19:59.433 } 00:19:59.433 ] 00:19:59.433 }, 00:19:59.433 { 00:19:59.433 "subsystem": "nvmf", 00:19:59.433 "config": [ 00:19:59.433 { 00:19:59.433 "method": "nvmf_set_config", 00:19:59.433 "params": { 00:19:59.433 "discovery_filter": "match_any", 00:19:59.433 "admin_cmd_passthru": { 00:19:59.433 "identify_ctrlr": false 00:19:59.433 } 00:19:59.433 } 00:19:59.433 }, 00:19:59.433 { 00:19:59.433 "method": "nvmf_set_max_subsystems", 00:19:59.433 "params": { 00:19:59.433 "max_subsystems": 1024 00:19:59.433 } 00:19:59.433 }, 00:19:59.433 { 00:19:59.433 "method": "nvmf_set_crdt", 00:19:59.433 "params": { 00:19:59.433 "crdt1": 0, 00:19:59.433 "crdt2": 0, 00:19:59.433 "crdt3": 0 00:19:59.433 } 00:19:59.433 }, 00:19:59.433 { 00:19:59.433 "method": "nvmf_create_transport", 00:19:59.433 "params": { 00:19:59.433 "trtype": "TCP", 00:19:59.433 "max_queue_depth": 128, 00:19:59.433 "max_io_qpairs_per_ctrlr": 127, 00:19:59.433 "in_capsule_data_size": 4096, 00:19:59.433 "max_io_size": 131072, 00:19:59.433 "io_unit_size": 131072, 00:19:59.433 "max_aq_depth": 128, 00:19:59.433 "num_shared_buffers": 511, 00:19:59.433 "buf_cache_size": 4294967295, 00:19:59.433 "dif_insert_or_strip": false, 00:19:59.433 "zcopy": false, 00:19:59.433 "c2h_success": false, 00:19:59.433 "sock_priority": 0, 00:19:59.433 "abort_timeout_sec": 1, 00:19:59.433 "ack_timeout": 0, 00:19:59.433 "data_wr_pool_size": 0 00:19:59.433 } 00:19:59.433 }, 00:19:59.433 { 00:19:59.433 "method": "nvmf_create_subsystem", 00:19:59.433 "params": { 00:19:59.433 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.433 "allow_any_host": false, 00:19:59.433 "serial_number": "00000000000000000000", 00:19:59.433 "model_number": "SPDK bdev Controller", 00:19:59.433 "max_namespaces": 32, 00:19:59.433 "min_cntlid": 1, 00:19:59.433 "max_cntlid": 65519, 00:19:59.433 "ana_reporting": false 00:19:59.433 } 00:19:59.433 }, 00:19:59.433 { 00:19:59.433 "method": "nvmf_subsystem_add_host", 00:19:59.433 "params": { 00:19:59.433 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.433 "host": "nqn.2016-06.io.spdk:host1", 00:19:59.433 "psk": "key0" 00:19:59.433 } 00:19:59.433 }, 00:19:59.433 { 00:19:59.433 "method": "nvmf_subsystem_add_ns", 00:19:59.433 "params": { 00:19:59.433 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.433 "namespace": { 00:19:59.433 "nsid": 1, 00:19:59.433 "bdev_name": "malloc0", 00:19:59.433 "nguid": "6D0C144BB1DC45809BF8DE0E1B2AAD10", 00:19:59.433 "uuid": "6d0c144b-b1dc-4580-9bf8-de0e1b2aad10", 00:19:59.433 "no_auto_visible": false 00:19:59.433 } 00:19:59.433 } 00:19:59.433 }, 00:19:59.433 { 00:19:59.433 "method": "nvmf_subsystem_add_listener", 00:19:59.433 "params": { 
00:19:59.433 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.433 "listen_address": { 00:19:59.433 "trtype": "TCP", 00:19:59.433 "adrfam": "IPv4", 00:19:59.433 "traddr": "10.0.0.2", 00:19:59.433 "trsvcid": "4420" 00:19:59.433 }, 00:19:59.433 "secure_channel": false, 00:19:59.433 "sock_impl": "ssl" 00:19:59.433 } 00:19:59.433 } 00:19:59.433 ] 00:19:59.433 } 00:19:59.433 ] 00:19:59.433 }' 00:19:59.433 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1560460 00:19:59.433 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1560460 00:19:59.433 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1560460 ']' 00:19:59.433 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.433 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:59.433 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.433 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:59.433 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.433 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:59.692 [2024-07-24 19:20:45.710339] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:19:59.692 [2024-07-24 19:20:45.710389] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.692 EAL: No free 2048 kB hugepages reported on node 1 00:19:59.692 [2024-07-24 19:20:45.783917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.692 [2024-07-24 19:20:45.856559] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.692 [2024-07-24 19:20:45.856594] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.692 [2024-07-24 19:20:45.856604] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:59.692 [2024-07-24 19:20:45.856614] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:59.692 [2024-07-24 19:20:45.856622] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:59.692 [2024-07-24 19:20:45.856674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.950 [2024-07-24 19:20:46.067731] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:59.950 [2024-07-24 19:20:46.108619] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:59.950 [2024-07-24 19:20:46.108803] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:00.519 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:00.519 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:00.519 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:00.519 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:00.519 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.519 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.519 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=1560652 00:20:00.519 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 1560652 /var/tmp/bdevperf.sock 00:20:00.519 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1560652 ']' 00:20:00.519 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:00.519 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:00.519 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:00.519 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:00.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
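[editor's note] With the target now listening on 10.0.0.2:4420 with TLS, the client side mirrors the setup: bdevperf loads the same PSK (key0 -> /tmp/tmp.pO6GwbkNDs) and attaches with it, as the bperfcfg above records. A sketch of the equivalent standalone RPC calls — flag spellings are assumptions from v24.09-era rpc.py and should be verified with `rpc.py bdev_nvme_attach_controller -h`:

  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pO6GwbkNDs
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

On success this exposes bdev nvme0n1, which is what bdev_enable_histogram and the verify workload below operate on.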
00:20:00.519 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:00.519 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:20:00.519 "subsystems": [ 00:20:00.519 { 00:20:00.519 "subsystem": "keyring", 00:20:00.519 "config": [ 00:20:00.519 { 00:20:00.519 "method": "keyring_file_add_key", 00:20:00.519 "params": { 00:20:00.519 "name": "key0", 00:20:00.519 "path": "/tmp/tmp.pO6GwbkNDs" 00:20:00.519 } 00:20:00.519 } 00:20:00.519 ] 00:20:00.519 }, 00:20:00.519 { 00:20:00.519 "subsystem": "iobuf", 00:20:00.519 "config": [ 00:20:00.519 { 00:20:00.519 "method": "iobuf_set_options", 00:20:00.519 "params": { 00:20:00.519 "small_pool_count": 8192, 00:20:00.519 "large_pool_count": 1024, 00:20:00.519 "small_bufsize": 8192, 00:20:00.519 "large_bufsize": 135168 00:20:00.519 } 00:20:00.519 } 00:20:00.519 ] 00:20:00.519 }, 00:20:00.519 { 00:20:00.519 "subsystem": "sock", 00:20:00.519 "config": [ 00:20:00.519 { 00:20:00.519 "method": "sock_set_default_impl", 00:20:00.519 "params": { 00:20:00.519 "impl_name": "posix" 00:20:00.519 } 00:20:00.519 }, 00:20:00.519 { 00:20:00.519 "method": "sock_impl_set_options", 00:20:00.519 "params": { 00:20:00.519 "impl_name": "ssl", 00:20:00.519 "recv_buf_size": 4096, 00:20:00.519 "send_buf_size": 4096, 00:20:00.519 "enable_recv_pipe": true, 00:20:00.519 "enable_quickack": false, 00:20:00.519 "enable_placement_id": 0, 00:20:00.519 "enable_zerocopy_send_server": true, 00:20:00.519 "enable_zerocopy_send_client": false, 00:20:00.519 "zerocopy_threshold": 0, 00:20:00.519 "tls_version": 0, 00:20:00.519 "enable_ktls": false 00:20:00.519 } 00:20:00.519 }, 00:20:00.519 { 00:20:00.519 "method": "sock_impl_set_options", 00:20:00.519 "params": { 00:20:00.519 "impl_name": "posix", 00:20:00.519 "recv_buf_size": 2097152, 00:20:00.519 "send_buf_size": 2097152, 00:20:00.519 "enable_recv_pipe": true, 00:20:00.519 "enable_quickack": false, 00:20:00.519 "enable_placement_id": 0, 00:20:00.519 "enable_zerocopy_send_server": true, 00:20:00.519 "enable_zerocopy_send_client": false, 00:20:00.519 "zerocopy_threshold": 0, 00:20:00.519 "tls_version": 0, 00:20:00.520 "enable_ktls": false 00:20:00.520 } 00:20:00.520 } 00:20:00.520 ] 00:20:00.520 }, 00:20:00.520 { 00:20:00.520 "subsystem": "vmd", 00:20:00.520 "config": [] 00:20:00.520 }, 00:20:00.520 { 00:20:00.520 "subsystem": "accel", 00:20:00.520 "config": [ 00:20:00.520 { 00:20:00.520 "method": "accel_set_options", 00:20:00.520 "params": { 00:20:00.520 "small_cache_size": 128, 00:20:00.520 "large_cache_size": 16, 00:20:00.520 "task_count": 2048, 00:20:00.520 "sequence_count": 2048, 00:20:00.520 "buf_count": 2048 00:20:00.520 } 00:20:00.520 } 00:20:00.520 ] 00:20:00.520 }, 00:20:00.520 { 00:20:00.520 "subsystem": "bdev", 00:20:00.520 "config": [ 00:20:00.520 { 00:20:00.520 "method": "bdev_set_options", 00:20:00.520 "params": { 00:20:00.520 "bdev_io_pool_size": 65535, 00:20:00.520 "bdev_io_cache_size": 256, 00:20:00.520 "bdev_auto_examine": true, 00:20:00.520 "iobuf_small_cache_size": 128, 00:20:00.520 "iobuf_large_cache_size": 16 00:20:00.520 } 00:20:00.520 }, 00:20:00.520 { 00:20:00.520 "method": "bdev_raid_set_options", 00:20:00.520 "params": { 00:20:00.520 "process_window_size_kb": 1024, 00:20:00.520 "process_max_bandwidth_mb_sec": 0 00:20:00.520 } 00:20:00.520 }, 00:20:00.520 { 00:20:00.520 "method": "bdev_iscsi_set_options", 00:20:00.520 "params": { 00:20:00.520 "timeout_sec": 30 00:20:00.520 } 00:20:00.520 }, 00:20:00.520 { 00:20:00.520 "method": 
"bdev_nvme_set_options", 00:20:00.520 "params": { 00:20:00.520 "action_on_timeout": "none", 00:20:00.520 "timeout_us": 0, 00:20:00.520 "timeout_admin_us": 0, 00:20:00.520 "keep_alive_timeout_ms": 10000, 00:20:00.520 "arbitration_burst": 0, 00:20:00.520 "low_priority_weight": 0, 00:20:00.520 "medium_priority_weight": 0, 00:20:00.520 "high_priority_weight": 0, 00:20:00.520 "nvme_adminq_poll_period_us": 10000, 00:20:00.520 "nvme_ioq_poll_period_us": 0, 00:20:00.520 "io_queue_requests": 512, 00:20:00.520 "delay_cmd_submit": true, 00:20:00.520 "transport_retry_count": 4, 00:20:00.520 "bdev_retry_count": 3, 00:20:00.520 "transport_ack_timeout": 0, 00:20:00.520 "ctrlr_loss_timeout_sec": 0, 00:20:00.520 "reconnect_delay_sec": 0, 00:20:00.520 "fast_io_fail_timeout_sec": 0, 00:20:00.520 "disable_auto_failback": false, 00:20:00.520 "generate_uuids": false, 00:20:00.520 "transport_tos": 0, 00:20:00.520 "nvme_error_stat": false, 00:20:00.520 "rdma_srq_size": 0, 00:20:00.520 "io_path_stat": false, 00:20:00.520 "allow_accel_sequence": false, 00:20:00.520 "rdma_max_cq_size": 0, 00:20:00.520 "rdma_cm_event_timeout_ms": 0, 00:20:00.520 "dhchap_digests": [ 00:20:00.520 "sha256", 00:20:00.520 "sha384", 00:20:00.520 "sha512" 00:20:00.520 ], 00:20:00.520 "dhchap_dhgroups": [ 00:20:00.520 "null", 00:20:00.520 "ffdhe2048", 00:20:00.520 "ffdhe3072", 00:20:00.520 "ffdhe4096", 00:20:00.520 "ffdhe6144", 00:20:00.520 "ffdhe8192" 00:20:00.520 ] 00:20:00.520 } 00:20:00.520 }, 00:20:00.520 { 00:20:00.520 "method": "bdev_nvme_attach_controller", 00:20:00.520 "params": { 00:20:00.520 "name": "nvme0", 00:20:00.520 "trtype": "TCP", 00:20:00.520 "adrfam": "IPv4", 00:20:00.520 "traddr": "10.0.0.2", 00:20:00.520 "trsvcid": "4420", 00:20:00.520 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.520 "prchk_reftag": false, 00:20:00.520 "prchk_guard": false, 00:20:00.520 "ctrlr_loss_timeout_sec": 0, 00:20:00.520 "reconnect_delay_sec": 0, 00:20:00.520 "fast_io_fail_timeout_sec": 0, 00:20:00.520 "psk": "key0", 00:20:00.520 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:00.520 "hdgst": false, 00:20:00.520 "ddgst": false 00:20:00.520 } 00:20:00.520 }, 00:20:00.520 { 00:20:00.520 "method": "bdev_nvme_set_hotplug", 00:20:00.520 "params": { 00:20:00.520 "period_us": 100000, 00:20:00.520 "enable": false 00:20:00.520 } 00:20:00.520 }, 00:20:00.520 { 00:20:00.520 "method": "bdev_enable_histogram", 00:20:00.520 "params": { 00:20:00.520 "name": "nvme0n1", 00:20:00.520 "enable": true 00:20:00.520 } 00:20:00.520 }, 00:20:00.520 { 00:20:00.520 "method": "bdev_wait_for_examine" 00:20:00.520 } 00:20:00.520 ] 00:20:00.520 }, 00:20:00.520 { 00:20:00.520 "subsystem": "nbd", 00:20:00.520 "config": [] 00:20:00.520 } 00:20:00.520 ] 00:20:00.520 }' 00:20:00.520 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.520 [2024-07-24 19:20:46.592584] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:20:00.520 [2024-07-24 19:20:46.592636] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1560652 ] 00:20:00.520 EAL: No free 2048 kB hugepages reported on node 1 00:20:00.520 [2024-07-24 19:20:46.663832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.520 [2024-07-24 19:20:46.733106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.779 [2024-07-24 19:20:46.883930] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:01.348 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:01.348 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:01.348 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:01.348 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:20:01.348 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.348 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:01.620 Running I/O for 1 seconds... 00:20:02.557 00:20:02.557 Latency(us) 00:20:02.557 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.557 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:02.557 Verification LBA range: start 0x0 length 0x2000 00:20:02.557 nvme0n1 : 1.02 5117.54 19.99 0.00 0.00 24743.68 4718.59 45508.20 00:20:02.557 =================================================================================================================== 00:20:02.557 Total : 5117.54 19.99 0.00 0.00 24743.68 4718.59 45508.20 00:20:02.557 0 00:20:02.557 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:20:02.557 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:20:02.557 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:02.557 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:20:02.557 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:20:02.557 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:20:02.557 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:02.557 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:02.557 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:20:02.557 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:02.557 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:02.557 nvmf_trace.0 00:20:02.557 19:20:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:20:02.557 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1560652 00:20:02.557 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1560652 ']' 00:20:02.557 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1560652 00:20:02.557 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:02.557 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:02.557 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1560652 00:20:02.816 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:02.816 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:02.816 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1560652' 00:20:02.816 killing process with pid 1560652 00:20:02.816 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1560652 00:20:02.816 Received shutdown signal, test time was about 1.000000 seconds 00:20:02.816 00:20:02.816 Latency(us) 00:20:02.816 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.816 =================================================================================================================== 00:20:02.816 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:02.816 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1560652 00:20:02.816 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:02.816 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:02.816 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:20:02.816 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:02.816 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:20:02.816 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:02.816 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:02.816 rmmod nvme_tcp 00:20:02.816 rmmod nvme_fabrics 00:20:03.075 rmmod nvme_keyring 00:20:03.075 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:03.075 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:20:03.075 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:20:03.075 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1560460 ']' 00:20:03.075 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1560460 00:20:03.075 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1560460 ']' 00:20:03.075 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1560460 00:20:03.075 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:03.075 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:03.075 19:20:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1560460 00:20:03.075 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:03.075 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:03.075 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1560460' 00:20:03.075 killing process with pid 1560460 00:20:03.075 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1560460 00:20:03.075 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1560460 00:20:03.334 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:03.334 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:03.334 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:03.334 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:03.334 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:03.334 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.334 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:03.334 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:05.238 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:05.238 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.3SMnuh7Gi7 /tmp/tmp.DXWvmcdUYX /tmp/tmp.pO6GwbkNDs 00:20:05.238 00:20:05.238 real 1m25.902s 00:20:05.238 user 2m5.620s 00:20:05.238 sys 0m35.335s 00:20:05.238 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:05.238 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.238 ************************************ 00:20:05.238 END TEST nvmf_tls 00:20:05.238 ************************************ 00:20:05.238 19:20:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:05.238 19:20:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:05.238 19:20:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:05.238 19:20:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:05.498 ************************************ 00:20:05.498 START TEST nvmf_fips 00:20:05.498 ************************************ 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:05.498 * Looking for test storage... 
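[editor's note] The nvmf_fips suite starting here first gates on the host's OpenSSL version: it reads `openssl version`, takes field 2, and requires at least 3.0.0 via its cmp_versions helper (traced step by step below). A short equivalent using sort -V, shown only to make the comparison concrete — the test itself does not use sort:

  v=$(openssl version | awk '{print $2}')   # "3.0.9" on this host, per the trace below
  [[ $(printf '%s\n' 3.0.0 "$v" | sort -V | head -n1) == 3.0.0 ]] && echo "openssl $v >= 3.0.0, FIPS test can proceed"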
00:20:05.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- 
# awk '{print $2}' 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:05.498 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:20:05.499 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:20:05.758 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:05.758 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:05.758 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:05.758 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:05.758 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:20:05.758 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:20:05.758 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:05.758 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:20:05.758 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:05.758 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:20:05.758 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:05.758 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:20:05.758 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:05.758 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:20:05.758 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:20:05.758 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:20:05.759 Error setting digest 00:20:05.759 0092D4DB877F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:05.759 0092D4DB877F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:05.759 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:20:05.759 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:05.759 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:05.759 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:05.759 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:20:05.759 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:05.759 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:05.759 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:05.759 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:05.759 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:05.759 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.759 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:05.759 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:05.759 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:05.759 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:05.759 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:20:05.759 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # 
local -ga e810 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:12.326 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 
00:20:12.326 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:12.326 Found net devices under 0000:af:00.0: cvl_0_0 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:12.326 Found net devices under 0000:af:00.1: cvl_0_1 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:12.326 
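(Annotation, not part of the log: gather_supported_nvmf_pci_devs whitelists NVMe-oF-capable NICs by PCI vendor:device ID, E810 as 0x8086:0x1592/0x159b, X722 as 0x8086:0x37d2, plus the Mellanox ConnectX IDs, and then resolves each matching PCI function to its kernel net device through sysfs. That is how 0000:af:00.0/1 become cvl_0_0 and cvl_0_1 above. A minimal standalone sketch of the resolution idea, using only the sysfs layout visible in the trace, not the script itself:

    # find E810-XXV (0x159b) functions and print their net devices
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] || continue
            echo "Found net device under ${pci##*/}: ${net##*/}"   # e.g. cvl_0_0
        done
    done)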
19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:12.326 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:12.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:12.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:20:12.327 00:20:12.327 --- 10.0.0.2 ping statistics --- 00:20:12.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.327 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:20:12.327 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:12.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:12.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:20:12.327 00:20:12.327 --- 10.0.0.1 ping statistics --- 00:20:12.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.327 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:20:12.327 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:12.327 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:20:12.327 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:12.327 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:12.327 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:12.327 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:12.327 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:12.327 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:12.327 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:12.327 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:12.327 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:12.327 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:12.327 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:12.327 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1564836 00:20:12.327 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:12.327 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1564836 00:20:12.327 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1564836 ']' 00:20:12.327 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.327 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:12.327 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.327 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:12.327 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:12.655 [2024-07-24 19:20:58.597369] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
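(Annotation, not part of the log: to recap the plumbing that just scrolled past, this is a phy run, so nvmf_tcp_init splits the two physical E810 ports into separate network stacks so that target and initiator traffic actually crosses the wire; the two ports are presumably cabled back-to-back, which the sub-millisecond pings are consistent with. Roughly:

    root namespace                           cvl_0_0_ns_spdk namespace
    cvl_0_1  10.0.0.1/24   <=== wire ===>    cvl_0_0  10.0.0.2/24
    (initiator side: bdevperf; iptables      (target side: nvmf_tgt launched via
     rule accepts tcp/4420 on cvl_0_1)        'ip netns exec cvl_0_0_ns_spdk'))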
00:20:12.655 [2024-07-24 19:20:58.597424] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:12.655 EAL: No free 2048 kB hugepages reported on node 1 00:20:12.655 [2024-07-24 19:20:58.670976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.655 [2024-07-24 19:20:58.741775] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:12.655 [2024-07-24 19:20:58.741818] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:12.655 [2024-07-24 19:20:58.741827] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:12.655 [2024-07-24 19:20:58.741836] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:12.655 [2024-07-24 19:20:58.741842] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:12.655 [2024-07-24 19:20:58.741863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.223 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:13.223 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:13.223 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:13.223 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:13.223 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:13.223 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:13.223 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:13.223 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:13.223 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:13.223 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:13.223 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:13.223 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:13.223 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:13.223 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:13.482 [2024-07-24 19:20:59.568667] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:13.482 [2024-07-24 19:20:59.584676] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:13.482 [2024-07-24 19:20:59.584860] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:13.482 
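(Annotation, not part of the log: the target is now up inside the namespace with a TLS-enabled listener. setup_nvmf_tgt_conf drives this configuration over the default /var/tmp/spdk.sock with rpc.py; a hedged sketch of roughly equivalent calls follows. The RPC names are real SPDK RPCs, but the exact flags and the malloc sizing here are assumptions for illustration, not lifted from fips.sh:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py bdev_malloc_create -b malloc0 32 4096        # backing bdev; sizing illustrative
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /path/to/key.txt

The per-host PSK file argument is exactly the "PSK path" feature that the deprecation warning just below flags for removal in v24.09.)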
[2024-07-24 19:20:59.612922] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:13.482 malloc0 00:20:13.482 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:13.482 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1564945 00:20:13.482 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:13.482 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1564945 /var/tmp/bdevperf.sock 00:20:13.482 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1564945 ']' 00:20:13.482 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:13.482 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:13.482 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:13.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:13.482 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:13.482 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:13.482 [2024-07-24 19:20:59.700220] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:20:13.482 [2024-07-24 19:20:59.700271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1564945 ] 00:20:13.741 EAL: No free 2048 kB hugepages reported on node 1 00:20:13.741 [2024-07-24 19:20:59.767851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.741 [2024-07-24 19:20:59.840548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.309 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:14.309 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:14.309 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:14.568 [2024-07-24 19:21:00.655430] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:14.568 [2024-07-24 19:21:00.655514] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:14.568 TLSTESTn1 00:20:14.569 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:14.828 Running I/O for 10 seconds... 
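(Annotation, not part of the log: one detail worth decoding while the I/O runs. The key configured above, NVMeTLSkey-1:01:...:, follows the NVMe TLS PSK interchange format, where the 01 field indicates SHA-256 and the base64 body carries the 32-byte configured PSK followed by a 4-byte CRC-32. The length is easy to sanity-check, and this key is a throwaway test constant from fips.sh, so nothing secret is exposed:

    # 48 base64 chars decode to 36 bytes = 32-byte PSK + 4-byte CRC-32
    echo -n 'VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ' | base64 -d | wc -c   # prints 36)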
00:20:24.804 
00:20:24.804 Latency(us)
00:20:24.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:24.804 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:24.804 Verification LBA range: start 0x0 length 0x2000
00:20:24.804 TLSTESTn1 : 10.02 5546.12 21.66 0.00 0.00 23034.36 4849.66 68786.59
00:20:24.804 ===================================================================================================================
00:20:24.804 Total : 5546.12 21.66 0.00 0.00 23034.36 4849.66 68786.59
00:20:24.804 0
00:20:24.804 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:20:24.804 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:20:24.804 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id
00:20:24.804 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0
00:20:24.804 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']'
00:20:24.804 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:20:24.804 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0
00:20:24.804 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]]
00:20:24.804 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files
00:20:24.804 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:20:24.804 nvmf_trace.0
00:20:24.804 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0
00:20:24.804 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1564945
00:20:24.804 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1564945 ']'
00:20:24.804 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1564945
00:20:24.804 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname
00:20:24.804 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:20:24.804 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1564945
00:20:25.063 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:20:25.063 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:20:25.063 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1564945'
00:20:25.063 killing process with pid 1564945
00:20:25.063 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1564945
00:20:25.063 Received shutdown signal, test time was about 10.000000 seconds
00:20:25.063 
00:20:25.063 Latency(us)
00:20:25.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:25.063 ===================================================================================================================
00:20:25.063 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:25.063 
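(Annotation, not part of the log: two notes on reading these tables. The all-zero table immediately above is just bdevperf's final snapshot emitted on shutdown, not a failure; the real numbers are in the first table, and they are internally consistent, which is a quick sanity check for any bdevperf run. With -q 128 outstanding I/Os, Little's law, outstanding = IOPS x average latency, recovers the queue depth, and MiB/s is just IOPS x 4 KiB:

    awk 'BEGIN { printf "%.1f\n", 5546.12 * 23034.36 / 1e6 }'    # ~127.8 I/Os in flight, matching -q 128
    awk 'BEGIN { printf "%.2f\n", 5546.12 * 4096 / 1048576 }'    # 21.66 MiB/s, matching the MiB/s column)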
[2024-07-24 19:21:11.047207] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:25.063 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1564945 00:20:25.063 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:25.063 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:25.063 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:20:25.063 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:25.063 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:20:25.063 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:25.063 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:25.063 rmmod nvme_tcp 00:20:25.063 rmmod nvme_fabrics 00:20:25.063 rmmod nvme_keyring 00:20:25.063 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:25.063 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:20:25.063 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:20:25.063 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1564836 ']' 00:20:25.063 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1564836 00:20:25.063 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1564836 ']' 00:20:25.063 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1564836 00:20:25.063 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:20:25.063 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:25.063 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1564836 00:20:25.323 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:25.323 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:25.323 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1564836' 00:20:25.323 killing process with pid 1564836 00:20:25.323 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1564836 00:20:25.323 [2024-07-24 19:21:11.345542] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:25.323 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1564836 00:20:25.323 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:25.323 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:25.323 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:25.323 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:25.323 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:25.323 19:21:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.323 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:25.323 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.859 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:27.859 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:27.859 00:20:27.859 real 0m22.134s 00:20:27.859 user 0m21.966s 00:20:27.859 sys 0m11.100s 00:20:27.859 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:27.859 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:27.859 ************************************ 00:20:27.859 END TEST nvmf_fips 00:20:27.859 ************************************ 00:20:27.859 19:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:20:27.859 19:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:20:27.859 19:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:20:27.859 19:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:20:27.859 19:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:20:27.859 19:21:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:34.433 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:34.433 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:20:34.433 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:34.433 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:34.433 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:34.433 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:34.433 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:34.433 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:20:34.433 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:34.433 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:20:34.433 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:20:34.433 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:20:34.433 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:20:34.433 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:20:34.433 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:20:34.433 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:34.433 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:34.434 
19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:34.434 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:34.434 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:34.434 Found net devices under 0000:af:00.0: cvl_0_0 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:34.434 Found net devices under 0000:af:00.1: cvl_0_1 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:34.434 ************************************ 00:20:34.434 START TEST nvmf_perf_adq 00:20:34.434 ************************************ 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:34.434 * Looking for test storage... 
00:20:34.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.434 19:21:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.434 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:34.435 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.435 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:20:34.435 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:34.435 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:34.435 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:34.435 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:34.435 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:34.435 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:34.435 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:34.435 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:34.435 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:34.435 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:34.435 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:41.009 19:21:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:41.009 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:41.009 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:41.009 Found net devices under 0000:af:00.0: cvl_0_0 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:41.009 Found net devices under 0000:af:00.1: cvl_0_1 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:20:41.009 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:41.980 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:44.518 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:49.795 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:20:49.795 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:49.795 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:49.795 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:49.795 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:49.795 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:49.795 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.795 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:49.795 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:49.795 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:49.795 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:49.795 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:49.795 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:49.795 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:49.795 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 
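(Annotation, not part of the log: context for the perf_adq pass now starting. ADQ, Application Device Queues, is an E810 feature, which is why adq_reload_driver bounces the ice module before device discovery re-runs. ADQ setups typically carve hardware traffic classes and steer flows into them with tc mqprio plus flower; a rough illustrative sketch of that style of plumbing, with queue counts, device name, and filter values as placeholders rather than anything taken from perf_adq.sh:

    # carve a second hardware traffic class and steer NVMe/TCP (port 4420) into it
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 clsact
    tc filter add dev cvl_0_0 protocol ip ingress prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1)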
00:20:49.795 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:49.795 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:49.795 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:49.795 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:49.795 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:49.795 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:49.795 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:49.795 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:49.795 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:49.796 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:49.796 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:49.796 Found net devices under 0000:af:00.0: cvl_0_0 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:49.796 19:21:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:49.796 Found net devices under 0000:af:00.1: cvl_0_1 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
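Aside: the nvmf_tcp_init block above builds a two-endpoint topology on a single host: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2/24, while the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1/24, so NVMe/TCP traffic crosses the physical E810 link instead of the kernel loopback path. Condensed, the plumbing is (commands as captured above; a sketch, not a verbatim extract):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port enters the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
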
00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:49.796 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:49.796 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:20:49.796 00:20:49.796 --- 10.0.0.2 ping statistics --- 00:20:49.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.796 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:49.796 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:49.796 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:20:49.796 00:20:49.796 --- 10.0.0.1 ping statistics --- 00:20:49.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.796 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:49.796 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:49.797 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:49.797 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:49.797 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:49.797 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:49.797 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1575880 00:20:49.797 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1575880 00:20:49.797 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:49.797 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1575880 ']' 00:20:49.797 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.797 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:49.797 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:20:49.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.797 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:49.797 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:49.797 [2024-07-24 19:21:35.617435] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:20:49.797 [2024-07-24 19:21:35.617482] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.797 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.797 [2024-07-24 19:21:35.691417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:49.797 [2024-07-24 19:21:35.760779] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:49.797 [2024-07-24 19:21:35.760825] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:49.797 [2024-07-24 19:21:35.760834] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:49.797 [2024-07-24 19:21:35.760843] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:49.797 [2024-07-24 19:21:35.760850] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:49.797 [2024-07-24 19:21:35.760899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.797 [2024-07-24 19:21:35.760996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:49.797 [2024-07-24 19:21:35.761060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:49.797 [2024-07-24 19:21:35.761062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.365 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:50.365 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:20:50.365 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:50.365 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:50.365 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:50.365 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:50.365 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:20:50.365 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:50.365 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:50.365 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.365 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:50.365 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.365 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 
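Aside: because nvmf_tgt was launched with --wait-for-rpc (inside the target namespace), the framework sits idle until configured over /var/tmp/spdk.sock, and the rpc_cmd calls in the trace are thin wrappers around scripts/rpc.py. Spelled out as plain rpc.py invocations (a sketch only -- the flags are copied from the trace below; the ADQ-enabled run later repeats the same sequence with --enable-placement-id 1 and --sock-priority 1):

  scripts/rpc.py sock_get_default_impl                            # reports posix
  scripts/rpc.py sock_impl_set_options -i posix \
      --enable-placement-id 0 --enable-zerocopy-send-server
  scripts/rpc.py framework_start_init                             # leave --wait-for-rpc state
  scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1             # 64 MiB RAM disk, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
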
00:20:50.365 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:50.365 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.365 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:50.365 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.365 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:50.365 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.365 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:50.623 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.623 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:50.623 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.623 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:50.623 [2024-07-24 19:21:36.615193] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:50.623 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.623 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:50.623 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.623 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:50.623 Malloc1 00:20:50.623 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.623 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:50.623 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.624 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:50.624 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.624 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:50.624 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.624 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:50.624 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.624 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:50.624 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.624 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:50.624 [2024-07-24 19:21:36.661625] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:50.624 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.624 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1576164 00:20:50.624 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:20:50.624 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:50.624 EAL: No free 2048 kB hugepages reported on node 1 00:20:52.528 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:20:52.528 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.528 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:52.528 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.528 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:20:52.528 "tick_rate": 2500000000, 00:20:52.528 "poll_groups": [ 00:20:52.528 { 00:20:52.528 "name": "nvmf_tgt_poll_group_000", 00:20:52.528 "admin_qpairs": 1, 00:20:52.528 "io_qpairs": 1, 00:20:52.528 "current_admin_qpairs": 1, 00:20:52.528 "current_io_qpairs": 1, 00:20:52.528 "pending_bdev_io": 0, 00:20:52.528 "completed_nvme_io": 21690, 00:20:52.528 "transports": [ 00:20:52.528 { 00:20:52.528 "trtype": "TCP" 00:20:52.528 } 00:20:52.528 ] 00:20:52.528 }, 00:20:52.528 { 00:20:52.528 "name": "nvmf_tgt_poll_group_001", 00:20:52.528 "admin_qpairs": 0, 00:20:52.528 "io_qpairs": 1, 00:20:52.528 "current_admin_qpairs": 0, 00:20:52.528 "current_io_qpairs": 1, 00:20:52.528 "pending_bdev_io": 0, 00:20:52.528 "completed_nvme_io": 21457, 00:20:52.528 "transports": [ 00:20:52.528 { 00:20:52.528 "trtype": "TCP" 00:20:52.528 } 00:20:52.528 ] 00:20:52.528 }, 00:20:52.528 { 00:20:52.528 "name": "nvmf_tgt_poll_group_002", 00:20:52.528 "admin_qpairs": 0, 00:20:52.528 "io_qpairs": 1, 00:20:52.528 "current_admin_qpairs": 0, 00:20:52.528 "current_io_qpairs": 1, 00:20:52.528 "pending_bdev_io": 0, 00:20:52.528 "completed_nvme_io": 20820, 00:20:52.528 "transports": [ 00:20:52.528 { 00:20:52.528 "trtype": "TCP" 00:20:52.528 } 00:20:52.528 ] 00:20:52.528 }, 00:20:52.528 { 00:20:52.528 "name": "nvmf_tgt_poll_group_003", 00:20:52.528 "admin_qpairs": 0, 00:20:52.528 "io_qpairs": 1, 00:20:52.528 "current_admin_qpairs": 0, 00:20:52.528 "current_io_qpairs": 1, 00:20:52.528 "pending_bdev_io": 0, 00:20:52.528 "completed_nvme_io": 21349, 00:20:52.528 "transports": [ 00:20:52.528 { 00:20:52.528 "trtype": "TCP" 00:20:52.528 } 00:20:52.528 ] 00:20:52.528 } 00:20:52.528 ] 00:20:52.528 }' 00:20:52.528 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:52.528 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:20:52.528 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:20:52.528 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:20:52.528 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@83 -- # wait 1576164 00:21:00.646 Initializing NVMe Controllers 00:21:00.646 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:00.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:00.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:00.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:00.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:00.646 Initialization complete. Launching workers. 00:21:00.646 ======================================================== 00:21:00.646 Latency(us) 00:21:00.646 Device Information : IOPS MiB/s Average min max 00:21:00.646 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11090.40 43.32 5772.39 1134.39 10159.50 00:21:00.646 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11155.80 43.58 5736.51 1557.43 10530.45 00:21:00.646 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10903.40 42.59 5886.28 1150.85 45567.83 00:21:00.646 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11288.50 44.10 5671.13 1804.67 10548.23 00:21:00.646 ======================================================== 00:21:00.646 Total : 44438.09 173.59 5765.61 1134.39 45567.83 00:21:00.646 00:21:00.646 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:21:00.646 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:00.646 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:21:00.646 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:00.646 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:21:00.646 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:00.646 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:00.646 rmmod nvme_tcp 00:21:00.646 rmmod nvme_fabrics 00:21:00.646 rmmod nvme_keyring 00:21:00.905 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:00.905 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:21:00.905 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:21:00.905 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1575880 ']' 00:21:00.905 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1575880 00:21:00.905 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1575880 ']' 00:21:00.905 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1575880 00:21:00.905 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:21:00.905 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:00.905 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1575880 00:21:00.905 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:00.905 19:21:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:00.905 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1575880' 00:21:00.905 killing process with pid 1575880 00:21:00.905 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1575880 00:21:00.905 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1575880 00:21:01.164 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:01.164 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:01.164 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:01.164 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:01.164 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:01.164 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.164 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:01.164 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.069 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:03.069 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:21:03.069 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:21:04.449 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:21:06.427 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:21:11.710 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:21:11.710 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:11.710 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:11.710 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:11.711 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:11.711 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:11.711 Found net devices under 0000:af:00.0: cvl_0_0 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:11.711 Found net devices under 0000:af:00.1: cvl_0_1 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:11.711 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:11.712 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:11.712 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:11.712 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:11.712 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:11.712 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:11.712 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:11.712 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:11.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:11.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:21:11.712 00:21:11.712 --- 10.0.0.2 ping statistics --- 00:21:11.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.712 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:21:11.712 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:11.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:11.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:21:11.712 00:21:11.712 --- 10.0.0.1 ping statistics --- 00:21:11.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.712 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:21:11.712 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:11.712 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:21:11.712 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:11.712 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:11.712 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:11.712 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:11.712 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:11.712 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:11.712 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:11.712 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:21:11.712 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:11.712 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:11.712 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:11.712 net.core.busy_poll = 1 00:21:11.712 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:11.972 net.core.busy_read = 1 00:21:11.972 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:11.972 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:11.972 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:11.972 
19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:11.972 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:11.972 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:11.972 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:11.972 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:11.972 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:11.972 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1579974 00:21:11.972 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1579974 00:21:11.973 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:11.973 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1579974 ']' 00:21:11.973 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.973 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:11.973 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.973 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:11.973 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:12.233 [2024-07-24 19:21:58.259758] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:21:12.233 [2024-07-24 19:21:58.259815] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.233 EAL: No free 2048 kB hugepages reported on node 1 00:21:12.233 [2024-07-24 19:21:58.335471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:12.233 [2024-07-24 19:21:58.407328] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:12.233 [2024-07-24 19:21:58.407372] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.233 [2024-07-24 19:21:58.407382] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:12.233 [2024-07-24 19:21:58.407391] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:12.233 [2024-07-24 19:21:58.407398] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
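Aside: the adq_configure_driver block above is the actual ADQ setup, run against the target-side port inside the namespace. hw-tc-offload plus the mqprio qdisc carves the NIC into two hardware traffic classes (TC0 = queues 0-1, TC1 = queues 2-3), the flower filter pins NVMe/TCP traffic for 10.0.0.2:4420 to TC1 entirely in hardware (skip_sw), and the busy_poll/busy_read sysctls keep application threads polling their own queues rather than taking interrupts. Condensed (commands as captured, minus the ip netns exec cvl_0_0_ns_spdk prefix):

  ethtool --offload cvl_0_0 hw-tc-offload on
  ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  # two traffic classes, offloaded to the NIC (hw 1, channel mode)
  tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  tc qdisc add dev cvl_0_0 ingress
  # steer NVMe/TCP (dst port 4420) into TC1 in hardware only
  tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
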
00:21:12.233 [2024-07-24 19:21:58.407444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.233 [2024-07-24 19:21:58.407463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:12.233 [2024-07-24 19:21:58.407552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:12.233 [2024-07-24 19:21:58.407554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.172 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:13.172 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:21:13.172 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:13.172 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:13.172 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:13.172 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:13.172 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:21:13.172 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:13.172 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:13.172 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.172 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:13.172 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.172 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:13.172 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:13.172 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.172 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:13.172 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.172 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:13.172 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.172 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:13.172 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.172 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:13.172 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.172 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:13.172 [2024-07-24 19:21:59.254381] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:13.172 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:21:13.172 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:13.172 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.172 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:13.172 Malloc1 00:21:13.172 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.172 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:13.172 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.172 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:13.173 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.173 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:13.173 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.173 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:13.173 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.173 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:13.173 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.173 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:13.173 [2024-07-24 19:21:59.308828] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:13.173 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.173 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1580256 00:21:13.173 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:21:13.173 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:13.173 EAL: No free 2048 kB hugepages reported on node 1 00:21:15.150 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:21:15.150 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.150 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:15.150 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.150 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:21:15.150 "tick_rate": 2500000000, 00:21:15.150 "poll_groups": [ 00:21:15.150 { 00:21:15.150 "name": "nvmf_tgt_poll_group_000", 00:21:15.150 "admin_qpairs": 1, 00:21:15.150 "io_qpairs": 2, 00:21:15.150 "current_admin_qpairs": 1, 00:21:15.150 
"current_io_qpairs": 2, 00:21:15.150 "pending_bdev_io": 0, 00:21:15.150 "completed_nvme_io": 29347, 00:21:15.150 "transports": [ 00:21:15.150 { 00:21:15.150 "trtype": "TCP" 00:21:15.150 } 00:21:15.150 ] 00:21:15.150 }, 00:21:15.150 { 00:21:15.150 "name": "nvmf_tgt_poll_group_001", 00:21:15.150 "admin_qpairs": 0, 00:21:15.150 "io_qpairs": 2, 00:21:15.150 "current_admin_qpairs": 0, 00:21:15.150 "current_io_qpairs": 2, 00:21:15.150 "pending_bdev_io": 0, 00:21:15.150 "completed_nvme_io": 30157, 00:21:15.150 "transports": [ 00:21:15.150 { 00:21:15.150 "trtype": "TCP" 00:21:15.150 } 00:21:15.150 ] 00:21:15.150 }, 00:21:15.150 { 00:21:15.150 "name": "nvmf_tgt_poll_group_002", 00:21:15.150 "admin_qpairs": 0, 00:21:15.150 "io_qpairs": 0, 00:21:15.150 "current_admin_qpairs": 0, 00:21:15.150 "current_io_qpairs": 0, 00:21:15.150 "pending_bdev_io": 0, 00:21:15.150 "completed_nvme_io": 0, 00:21:15.150 "transports": [ 00:21:15.150 { 00:21:15.150 "trtype": "TCP" 00:21:15.150 } 00:21:15.150 ] 00:21:15.150 }, 00:21:15.150 { 00:21:15.150 "name": "nvmf_tgt_poll_group_003", 00:21:15.150 "admin_qpairs": 0, 00:21:15.150 "io_qpairs": 0, 00:21:15.150 "current_admin_qpairs": 0, 00:21:15.150 "current_io_qpairs": 0, 00:21:15.150 "pending_bdev_io": 0, 00:21:15.150 "completed_nvme_io": 0, 00:21:15.150 "transports": [ 00:21:15.150 { 00:21:15.150 "trtype": "TCP" 00:21:15.150 } 00:21:15.150 ] 00:21:15.150 } 00:21:15.150 ] 00:21:15.150 }' 00:21:15.150 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:15.150 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:21:15.408 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:21:15.408 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:21:15.408 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1580256 00:21:23.533 Initializing NVMe Controllers 00:21:23.533 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:23.533 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:23.533 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:23.533 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:23.533 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:23.533 Initialization complete. Launching workers. 
00:21:23.533 ======================================================== 00:21:23.533 Latency(us) 00:21:23.533 Device Information : IOPS MiB/s Average min max 00:21:23.533 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8291.60 32.39 7752.83 1462.55 52104.82 00:21:23.533 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7354.00 28.73 8731.26 1498.13 52971.72 00:21:23.533 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8298.50 32.42 7735.68 1452.85 52985.09 00:21:23.533 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7749.60 30.27 8260.61 1506.71 52460.34 00:21:23.533 ======================================================== 00:21:23.533 Total : 31693.70 123.80 8099.53 1452.85 52985.09 00:21:23.533 00:21:23.533 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:21:23.533 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:23.533 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:21:23.533 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:23.533 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:21:23.533 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:23.533 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:23.533 rmmod nvme_tcp 00:21:23.533 rmmod nvme_fabrics 00:21:23.533 rmmod nvme_keyring 00:21:23.533 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:23.533 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:21:23.533 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:21:23.533 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1579974 ']' 00:21:23.533 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1579974 00:21:23.533 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1579974 ']' 00:21:23.533 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1579974 00:21:23.533 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:21:23.533 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:23.533 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1579974 00:21:23.533 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:23.533 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:23.533 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1579974' 00:21:23.533 killing process with pid 1579974 00:21:23.533 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1579974 00:21:23.533 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1579974 00:21:23.793 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:23.793 
19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:23.793 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:23.793 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:23.793 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:23.793 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.793 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:23.793 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.698 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:25.698 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:21:25.698 00:21:25.698 real 0m51.736s 00:21:25.698 user 2m46.896s 00:21:25.698 sys 0m13.909s 00:21:25.698 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:25.698 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:25.698 ************************************ 00:21:25.698 END TEST nvmf_perf_adq 00:21:25.698 ************************************ 00:21:25.957 19:22:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:25.957 19:22:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:25.957 19:22:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:25.957 19:22:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:25.957 ************************************ 00:21:25.957 START TEST nvmf_shutdown 00:21:25.957 ************************************ 00:21:25.957 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:25.957 * Looking for test storage... 
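
Before the shutdown tests start, note the teardown order nvmftestfini used above: unload nvme-tcp, nvme-fabrics and nvme-keyring, then stop the target pid through killprocess, which probes liveness with kill -0, inspects the process name so it never signals a sudo wrapper, and reaps the child with wait. A simplified re-creation of that pattern (a sketch, not the exact autotest_common.sh implementation):

# Sketch of the killprocess liveness-check-and-reap pattern traced above.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1             # pid must be alive
    if [[ $(uname) == Linux ]]; then
        # Refuse to signal the sudo wrapper; only kill the real process.
        [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                    # reap if it is our child
}
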
00:21:25.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:25.957 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:25.957 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:25.957 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:25.957 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:25.957 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:25.957 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:25.957 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:25.957 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:25.957 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:25.957 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:25.957 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:25.957 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:25.957 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:25.957 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:21:25.957 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.958 19:22:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:25.958 19:22:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:25.958 ************************************ 00:21:25.958 START TEST nvmf_shutdown_tc1 00:21:25.958 ************************************ 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:25.958 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:32.531 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:32.531 19:22:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:32.531 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:32.531 Found net devices under 0000:af:00.0: cvl_0_0 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:32.531 Found net devices under 0000:af:00.1: cvl_0_1 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:32.531 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:32.532 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:32.532 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:32.532 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:32.532 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:32.532 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:32.532 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:32.532 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:32.532 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:32.532 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:32.532 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:32.532 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:32.532 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:32.532 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:32.532 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:32.791 19:22:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:32.791 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:32.791 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:32.791 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:32.791 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:32.791 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:33.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:33.050 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:33.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:21:33.050 00:21:33.050 --- 10.0.0.2 ping statistics --- 00:21:33.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.050 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:21:33.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:33.050 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:33.050 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:21:33.051 00:21:33.051 --- 10.0.0.1 ping statistics --- 00:21:33.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.051 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:21:33.051 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:33.051 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:21:33.051 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:33.051 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:33.051 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:33.051 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:33.051 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:33.051 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:33.051 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:33.051 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:33.051 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:33.051 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:33.051 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 
-- # set +x 00:21:33.051 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1585693 00:21:33.051 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1585693 00:21:33.051 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:33.051 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1585693 ']' 00:21:33.051 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.051 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:33.051 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.051 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:33.051 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:33.051 [2024-07-24 19:22:19.150117] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:21:33.051 [2024-07-24 19:22:19.150170] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.051 EAL: No free 2048 kB hugepages reported on node 1 00:21:33.051 [2024-07-24 19:22:19.225246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:33.310 [2024-07-24 19:22:19.301602] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:33.310 [2024-07-24 19:22:19.301637] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:33.310 [2024-07-24 19:22:19.301647] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:33.311 [2024-07-24 19:22:19.301656] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:33.311 [2024-07-24 19:22:19.301679] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
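
nvmftestinit above turns the two e810 ports into a self-contained TCP test topology: the target-side port cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and addressed 10.0.0.2/24, the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1/24, TCP port 4420 is opened in the root-namespace firewall, and both directions are ping-verified before nvmf_tgt is launched inside the namespace. The same commands, collected from the trace (assuming the cvl_0_0/cvl_0_1 interface names created by the ice driver setup here):

# Two-port loopback topology: target port in its own netns,
# initiator port left in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Verify reachability both ways, then start the target inside the namespace
# so its listener binds behind cvl_0_0.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
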
00:21:33.311 [2024-07-24 19:22:19.301778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:33.311 [2024-07-24 19:22:19.301880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:33.311 [2024-07-24 19:22:19.301914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:33.311 [2024-07-24 19:22:19.301915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:33.879 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:33.879 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:21:33.879 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:33.879 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:33.879 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:33.879 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:33.879 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:33.879 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.879 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:33.879 [2024-07-24 19:22:20.015071] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:33.879 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.880 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:33.880 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:33.880 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:33.880 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:33.880 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:33.880 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:33.880 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:33.880 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:33.880 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:33.880 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:33.880 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:33.880 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:21:33.880 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:33.880 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:33.880 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:33.880 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:33.880 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:33.880 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:33.880 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:33.880 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:33.880 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:33.880 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:33.880 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:33.880 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:33.880 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:33.880 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:33.880 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.880 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:33.880 Malloc1 00:21:34.139 [2024-07-24 19:22:20.125823] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:34.139 Malloc2 00:21:34.139 Malloc3 00:21:34.139 Malloc4 00:21:34.139 Malloc5 00:21:34.139 Malloc6 00:21:34.139 Malloc7 00:21:34.399 Malloc8 00:21:34.399 Malloc9 00:21:34.399 Malloc10 00:21:34.399 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.399 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:34.399 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:34.399 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:34.399 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1585951 00:21:34.399 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1585951 /var/tmp/bdevperf.sock 00:21:34.399 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1585951 ']' 00:21:34.399 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:34.399 19:22:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:34.399 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:34.399 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:34.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:34.399 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:34.399 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:34.399 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:34.399 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:34.399 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:34.399 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.399 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.399 { 00:21:34.399 "params": { 00:21:34.399 "name": "Nvme$subsystem", 00:21:34.399 "trtype": "$TEST_TRANSPORT", 00:21:34.399 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.399 "adrfam": "ipv4", 00:21:34.399 "trsvcid": "$NVMF_PORT", 00:21:34.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.400 "hdgst": ${hdgst:-false}, 00:21:34.400 "ddgst": ${ddgst:-false} 00:21:34.400 }, 00:21:34.400 "method": "bdev_nvme_attach_controller" 00:21:34.400 } 00:21:34.400 EOF 00:21:34.400 )") 00:21:34.400 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:34.400 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.400 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.400 { 00:21:34.400 "params": { 00:21:34.400 "name": "Nvme$subsystem", 00:21:34.400 "trtype": "$TEST_TRANSPORT", 00:21:34.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.400 "adrfam": "ipv4", 00:21:34.400 "trsvcid": "$NVMF_PORT", 00:21:34.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.400 "hdgst": ${hdgst:-false}, 00:21:34.400 "ddgst": ${ddgst:-false} 00:21:34.400 }, 00:21:34.400 "method": "bdev_nvme_attach_controller" 00:21:34.400 } 00:21:34.400 EOF 00:21:34.400 )") 00:21:34.400 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:34.400 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.400 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.400 { 00:21:34.400 "params": { 00:21:34.400 "name": 
"Nvme$subsystem", 00:21:34.400 "trtype": "$TEST_TRANSPORT", 00:21:34.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.400 "adrfam": "ipv4", 00:21:34.400 "trsvcid": "$NVMF_PORT", 00:21:34.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.400 "hdgst": ${hdgst:-false}, 00:21:34.400 "ddgst": ${ddgst:-false} 00:21:34.400 }, 00:21:34.400 "method": "bdev_nvme_attach_controller" 00:21:34.400 } 00:21:34.400 EOF 00:21:34.400 )") 00:21:34.400 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:34.400 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.400 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.400 { 00:21:34.400 "params": { 00:21:34.400 "name": "Nvme$subsystem", 00:21:34.400 "trtype": "$TEST_TRANSPORT", 00:21:34.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.400 "adrfam": "ipv4", 00:21:34.400 "trsvcid": "$NVMF_PORT", 00:21:34.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.400 "hdgst": ${hdgst:-false}, 00:21:34.400 "ddgst": ${ddgst:-false} 00:21:34.400 }, 00:21:34.400 "method": "bdev_nvme_attach_controller" 00:21:34.400 } 00:21:34.400 EOF 00:21:34.400 )") 00:21:34.400 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:34.400 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.400 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.400 { 00:21:34.400 "params": { 00:21:34.400 "name": "Nvme$subsystem", 00:21:34.400 "trtype": "$TEST_TRANSPORT", 00:21:34.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.400 "adrfam": "ipv4", 00:21:34.400 "trsvcid": "$NVMF_PORT", 00:21:34.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.400 "hdgst": ${hdgst:-false}, 00:21:34.400 "ddgst": ${ddgst:-false} 00:21:34.400 }, 00:21:34.400 "method": "bdev_nvme_attach_controller" 00:21:34.400 } 00:21:34.400 EOF 00:21:34.400 )") 00:21:34.400 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:34.400 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.400 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.400 { 00:21:34.400 "params": { 00:21:34.400 "name": "Nvme$subsystem", 00:21:34.400 "trtype": "$TEST_TRANSPORT", 00:21:34.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.400 "adrfam": "ipv4", 00:21:34.400 "trsvcid": "$NVMF_PORT", 00:21:34.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.400 "hdgst": ${hdgst:-false}, 00:21:34.400 "ddgst": ${ddgst:-false} 00:21:34.400 }, 00:21:34.400 "method": "bdev_nvme_attach_controller" 00:21:34.400 } 00:21:34.400 EOF 00:21:34.400 )") 00:21:34.400 [2024-07-24 19:22:20.612109] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:21:34.400 [2024-07-24 19:22:20.612159] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:34.400 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:34.400 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.400 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.400 { 00:21:34.400 "params": { 00:21:34.400 "name": "Nvme$subsystem", 00:21:34.400 "trtype": "$TEST_TRANSPORT", 00:21:34.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.400 "adrfam": "ipv4", 00:21:34.400 "trsvcid": "$NVMF_PORT", 00:21:34.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.400 "hdgst": ${hdgst:-false}, 00:21:34.400 "ddgst": ${ddgst:-false} 00:21:34.400 }, 00:21:34.400 "method": "bdev_nvme_attach_controller" 00:21:34.400 } 00:21:34.400 EOF 00:21:34.400 )") 00:21:34.400 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:34.400 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.400 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.400 { 00:21:34.400 "params": { 00:21:34.400 "name": "Nvme$subsystem", 00:21:34.400 "trtype": "$TEST_TRANSPORT", 00:21:34.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.400 "adrfam": "ipv4", 00:21:34.400 "trsvcid": "$NVMF_PORT", 00:21:34.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.400 "hdgst": ${hdgst:-false}, 00:21:34.400 "ddgst": ${ddgst:-false} 00:21:34.400 }, 00:21:34.400 "method": "bdev_nvme_attach_controller" 00:21:34.400 } 00:21:34.400 EOF 00:21:34.400 )") 00:21:34.400 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:34.400 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.400 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.400 { 00:21:34.400 "params": { 00:21:34.400 "name": "Nvme$subsystem", 00:21:34.400 "trtype": "$TEST_TRANSPORT", 00:21:34.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.400 "adrfam": "ipv4", 00:21:34.400 "trsvcid": "$NVMF_PORT", 00:21:34.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.400 "hdgst": ${hdgst:-false}, 00:21:34.400 "ddgst": ${ddgst:-false} 00:21:34.400 }, 00:21:34.400 "method": "bdev_nvme_attach_controller" 00:21:34.400 } 00:21:34.401 EOF 00:21:34.401 )") 00:21:34.401 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:34.661 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.661 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.661 { 00:21:34.661 "params": { 00:21:34.661 "name": "Nvme$subsystem", 00:21:34.661 
"trtype": "$TEST_TRANSPORT", 00:21:34.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.661 "adrfam": "ipv4", 00:21:34.661 "trsvcid": "$NVMF_PORT", 00:21:34.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.661 "hdgst": ${hdgst:-false}, 00:21:34.661 "ddgst": ${ddgst:-false} 00:21:34.661 }, 00:21:34.661 "method": "bdev_nvme_attach_controller" 00:21:34.661 } 00:21:34.661 EOF 00:21:34.661 )") 00:21:34.661 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:34.661 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.661 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:21:34.661 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:34.661 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:34.661 "params": { 00:21:34.661 "name": "Nvme1", 00:21:34.661 "trtype": "tcp", 00:21:34.661 "traddr": "10.0.0.2", 00:21:34.661 "adrfam": "ipv4", 00:21:34.661 "trsvcid": "4420", 00:21:34.661 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:34.661 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:34.661 "hdgst": false, 00:21:34.661 "ddgst": false 00:21:34.661 }, 00:21:34.661 "method": "bdev_nvme_attach_controller" 00:21:34.661 },{ 00:21:34.661 "params": { 00:21:34.661 "name": "Nvme2", 00:21:34.661 "trtype": "tcp", 00:21:34.661 "traddr": "10.0.0.2", 00:21:34.661 "adrfam": "ipv4", 00:21:34.661 "trsvcid": "4420", 00:21:34.661 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:34.661 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:34.661 "hdgst": false, 00:21:34.661 "ddgst": false 00:21:34.661 }, 00:21:34.661 "method": "bdev_nvme_attach_controller" 00:21:34.661 },{ 00:21:34.661 "params": { 00:21:34.661 "name": "Nvme3", 00:21:34.661 "trtype": "tcp", 00:21:34.661 "traddr": "10.0.0.2", 00:21:34.661 "adrfam": "ipv4", 00:21:34.661 "trsvcid": "4420", 00:21:34.661 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:34.661 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:34.661 "hdgst": false, 00:21:34.661 "ddgst": false 00:21:34.661 }, 00:21:34.661 "method": "bdev_nvme_attach_controller" 00:21:34.661 },{ 00:21:34.661 "params": { 00:21:34.661 "name": "Nvme4", 00:21:34.661 "trtype": "tcp", 00:21:34.661 "traddr": "10.0.0.2", 00:21:34.661 "adrfam": "ipv4", 00:21:34.661 "trsvcid": "4420", 00:21:34.661 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:34.661 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:34.661 "hdgst": false, 00:21:34.661 "ddgst": false 00:21:34.661 }, 00:21:34.661 "method": "bdev_nvme_attach_controller" 00:21:34.661 },{ 00:21:34.661 "params": { 00:21:34.661 "name": "Nvme5", 00:21:34.661 "trtype": "tcp", 00:21:34.661 "traddr": "10.0.0.2", 00:21:34.661 "adrfam": "ipv4", 00:21:34.661 "trsvcid": "4420", 00:21:34.661 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:34.661 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:34.661 "hdgst": false, 00:21:34.661 "ddgst": false 00:21:34.661 }, 00:21:34.661 "method": "bdev_nvme_attach_controller" 00:21:34.661 },{ 00:21:34.661 "params": { 00:21:34.661 "name": "Nvme6", 00:21:34.661 "trtype": "tcp", 00:21:34.661 "traddr": "10.0.0.2", 00:21:34.661 "adrfam": "ipv4", 00:21:34.661 "trsvcid": "4420", 00:21:34.661 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:34.661 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:34.661 "hdgst": false, 00:21:34.661 "ddgst": false 00:21:34.661 }, 00:21:34.661 "method": 
"bdev_nvme_attach_controller" 00:21:34.661 },{ 00:21:34.661 "params": { 00:21:34.661 "name": "Nvme7", 00:21:34.661 "trtype": "tcp", 00:21:34.661 "traddr": "10.0.0.2", 00:21:34.661 "adrfam": "ipv4", 00:21:34.661 "trsvcid": "4420", 00:21:34.661 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:34.661 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:34.661 "hdgst": false, 00:21:34.661 "ddgst": false 00:21:34.661 }, 00:21:34.661 "method": "bdev_nvme_attach_controller" 00:21:34.661 },{ 00:21:34.661 "params": { 00:21:34.661 "name": "Nvme8", 00:21:34.661 "trtype": "tcp", 00:21:34.661 "traddr": "10.0.0.2", 00:21:34.661 "adrfam": "ipv4", 00:21:34.661 "trsvcid": "4420", 00:21:34.661 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:34.661 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:34.661 "hdgst": false, 00:21:34.661 "ddgst": false 00:21:34.661 }, 00:21:34.661 "method": "bdev_nvme_attach_controller" 00:21:34.661 },{ 00:21:34.661 "params": { 00:21:34.661 "name": "Nvme9", 00:21:34.661 "trtype": "tcp", 00:21:34.661 "traddr": "10.0.0.2", 00:21:34.661 "adrfam": "ipv4", 00:21:34.661 "trsvcid": "4420", 00:21:34.661 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:34.661 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:34.661 "hdgst": false, 00:21:34.661 "ddgst": false 00:21:34.661 }, 00:21:34.661 "method": "bdev_nvme_attach_controller" 00:21:34.661 },{ 00:21:34.661 "params": { 00:21:34.661 "name": "Nvme10", 00:21:34.661 "trtype": "tcp", 00:21:34.661 "traddr": "10.0.0.2", 00:21:34.661 "adrfam": "ipv4", 00:21:34.661 "trsvcid": "4420", 00:21:34.661 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:34.661 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:34.661 "hdgst": false, 00:21:34.661 "ddgst": false 00:21:34.661 }, 00:21:34.661 "method": "bdev_nvme_attach_controller" 00:21:34.661 }' 00:21:34.661 [2024-07-24 19:22:20.685449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.661 [2024-07-24 19:22:20.753453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.044 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:36.044 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:21:36.044 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:36.044 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.044 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:36.044 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.044 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1585951 00:21:36.044 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:21:36.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1585951 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:36.045 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:21:36.986 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 
1585693 00:21:36.986 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:36.986 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:36.986 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:36.986 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:36.986 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:36.986 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:36.986 { 00:21:36.986 "params": { 00:21:36.986 "name": "Nvme$subsystem", 00:21:36.986 "trtype": "$TEST_TRANSPORT", 00:21:36.986 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.986 "adrfam": "ipv4", 00:21:36.986 "trsvcid": "$NVMF_PORT", 00:21:36.986 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.986 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.986 "hdgst": ${hdgst:-false}, 00:21:36.986 "ddgst": ${ddgst:-false} 00:21:36.986 }, 00:21:36.986 "method": "bdev_nvme_attach_controller" 00:21:36.986 } 00:21:36.986 EOF 00:21:36.986 )") 00:21:36.986 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:36.986 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:36.986 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:36.986 { 00:21:36.986 "params": { 00:21:36.986 "name": "Nvme$subsystem", 00:21:36.986 "trtype": "$TEST_TRANSPORT", 00:21:36.986 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.986 "adrfam": "ipv4", 00:21:36.986 "trsvcid": "$NVMF_PORT", 00:21:36.986 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.986 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.986 "hdgst": ${hdgst:-false}, 00:21:36.986 "ddgst": ${ddgst:-false} 00:21:36.986 }, 00:21:36.986 "method": "bdev_nvme_attach_controller" 00:21:36.986 } 00:21:36.986 EOF 00:21:36.986 )") 00:21:36.986 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:36.987 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:36.987 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:36.987 { 00:21:36.987 "params": { 00:21:36.987 "name": "Nvme$subsystem", 00:21:36.987 "trtype": "$TEST_TRANSPORT", 00:21:36.987 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.987 "adrfam": "ipv4", 00:21:36.987 "trsvcid": "$NVMF_PORT", 00:21:36.987 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.987 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.987 "hdgst": ${hdgst:-false}, 00:21:36.987 "ddgst": ${ddgst:-false} 00:21:36.987 }, 00:21:36.987 "method": "bdev_nvme_attach_controller" 00:21:36.987 } 00:21:36.987 EOF 00:21:36.987 )") 00:21:36.987 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:36.987 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- 
# for subsystem in "${@:-1}" 00:21:36.987 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:36.987 { 00:21:36.987 "params": { 00:21:36.987 "name": "Nvme$subsystem", 00:21:36.987 "trtype": "$TEST_TRANSPORT", 00:21:36.987 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.987 "adrfam": "ipv4", 00:21:36.987 "trsvcid": "$NVMF_PORT", 00:21:36.987 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.987 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.987 "hdgst": ${hdgst:-false}, 00:21:36.987 "ddgst": ${ddgst:-false} 00:21:36.987 }, 00:21:36.987 "method": "bdev_nvme_attach_controller" 00:21:36.987 } 00:21:36.987 EOF 00:21:36.987 )") 00:21:36.987 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:36.987 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:36.987 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:36.987 { 00:21:36.987 "params": { 00:21:36.987 "name": "Nvme$subsystem", 00:21:36.987 "trtype": "$TEST_TRANSPORT", 00:21:36.987 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.987 "adrfam": "ipv4", 00:21:36.987 "trsvcid": "$NVMF_PORT", 00:21:36.987 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.987 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.987 "hdgst": ${hdgst:-false}, 00:21:36.987 "ddgst": ${ddgst:-false} 00:21:36.987 }, 00:21:36.987 "method": "bdev_nvme_attach_controller" 00:21:36.987 } 00:21:36.987 EOF 00:21:36.987 )") 00:21:36.987 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:36.987 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:36.987 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:36.987 { 00:21:36.987 "params": { 00:21:36.987 "name": "Nvme$subsystem", 00:21:36.987 "trtype": "$TEST_TRANSPORT", 00:21:36.987 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.987 "adrfam": "ipv4", 00:21:36.987 "trsvcid": "$NVMF_PORT", 00:21:36.987 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.987 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.987 "hdgst": ${hdgst:-false}, 00:21:36.987 "ddgst": ${ddgst:-false} 00:21:36.987 }, 00:21:36.987 "method": "bdev_nvme_attach_controller" 00:21:36.987 } 00:21:36.987 EOF 00:21:36.987 )") 00:21:36.987 [2024-07-24 19:22:23.158661] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:21:36.987 [2024-07-24 19:22:23.158713] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1586490 ] 00:21:36.987 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:36.987 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:36.987 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:36.987 { 00:21:36.987 "params": { 00:21:36.987 "name": "Nvme$subsystem", 00:21:36.987 "trtype": "$TEST_TRANSPORT", 00:21:36.987 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.987 "adrfam": "ipv4", 00:21:36.987 "trsvcid": "$NVMF_PORT", 00:21:36.987 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.987 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.987 "hdgst": ${hdgst:-false}, 00:21:36.987 "ddgst": ${ddgst:-false} 00:21:36.987 }, 00:21:36.987 "method": "bdev_nvme_attach_controller" 00:21:36.987 } 00:21:36.987 EOF 00:21:36.987 )") 00:21:36.987 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:36.987 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:36.987 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:36.987 { 00:21:36.987 "params": { 00:21:36.987 "name": "Nvme$subsystem", 00:21:36.987 "trtype": "$TEST_TRANSPORT", 00:21:36.987 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.987 "adrfam": "ipv4", 00:21:36.987 "trsvcid": "$NVMF_PORT", 00:21:36.987 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.987 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.987 "hdgst": ${hdgst:-false}, 00:21:36.987 "ddgst": ${ddgst:-false} 00:21:36.987 }, 00:21:36.987 "method": "bdev_nvme_attach_controller" 00:21:36.987 } 00:21:36.987 EOF 00:21:36.987 )") 00:21:36.987 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:36.987 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:36.987 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:36.987 { 00:21:36.987 "params": { 00:21:36.987 "name": "Nvme$subsystem", 00:21:36.987 "trtype": "$TEST_TRANSPORT", 00:21:36.987 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.987 "adrfam": "ipv4", 00:21:36.987 "trsvcid": "$NVMF_PORT", 00:21:36.987 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.987 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.987 "hdgst": ${hdgst:-false}, 00:21:36.987 "ddgst": ${ddgst:-false} 00:21:36.987 }, 00:21:36.987 "method": "bdev_nvme_attach_controller" 00:21:36.987 } 00:21:36.987 EOF 00:21:36.987 )") 00:21:36.987 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:36.987 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:36.987 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:36.987 { 00:21:36.987 "params": { 00:21:36.987 "name": 
"Nvme$subsystem", 00:21:36.987 "trtype": "$TEST_TRANSPORT", 00:21:36.987 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.987 "adrfam": "ipv4", 00:21:36.987 "trsvcid": "$NVMF_PORT", 00:21:36.987 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.987 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.987 "hdgst": ${hdgst:-false}, 00:21:36.987 "ddgst": ${ddgst:-false} 00:21:36.987 }, 00:21:36.987 "method": "bdev_nvme_attach_controller" 00:21:36.987 } 00:21:36.987 EOF 00:21:36.987 )") 00:21:36.987 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:36.987 EAL: No free 2048 kB hugepages reported on node 1 00:21:36.987 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:21:36.987 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:36.987 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:36.987 "params": { 00:21:36.987 "name": "Nvme1", 00:21:36.987 "trtype": "tcp", 00:21:36.987 "traddr": "10.0.0.2", 00:21:36.987 "adrfam": "ipv4", 00:21:36.987 "trsvcid": "4420", 00:21:36.987 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:36.987 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:36.987 "hdgst": false, 00:21:36.987 "ddgst": false 00:21:36.987 }, 00:21:36.987 "method": "bdev_nvme_attach_controller" 00:21:36.987 },{ 00:21:36.987 "params": { 00:21:36.987 "name": "Nvme2", 00:21:36.987 "trtype": "tcp", 00:21:36.987 "traddr": "10.0.0.2", 00:21:36.987 "adrfam": "ipv4", 00:21:36.987 "trsvcid": "4420", 00:21:36.987 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:36.987 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:36.987 "hdgst": false, 00:21:36.987 "ddgst": false 00:21:36.987 }, 00:21:36.987 "method": "bdev_nvme_attach_controller" 00:21:36.987 },{ 00:21:36.987 "params": { 00:21:36.987 "name": "Nvme3", 00:21:36.987 "trtype": "tcp", 00:21:36.987 "traddr": "10.0.0.2", 00:21:36.987 "adrfam": "ipv4", 00:21:36.987 "trsvcid": "4420", 00:21:36.987 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:36.987 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:36.987 "hdgst": false, 00:21:36.987 "ddgst": false 00:21:36.987 }, 00:21:36.988 "method": "bdev_nvme_attach_controller" 00:21:36.988 },{ 00:21:36.988 "params": { 00:21:36.988 "name": "Nvme4", 00:21:36.988 "trtype": "tcp", 00:21:36.988 "traddr": "10.0.0.2", 00:21:36.988 "adrfam": "ipv4", 00:21:36.988 "trsvcid": "4420", 00:21:36.988 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:36.988 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:36.988 "hdgst": false, 00:21:36.988 "ddgst": false 00:21:36.988 }, 00:21:36.988 "method": "bdev_nvme_attach_controller" 00:21:36.988 },{ 00:21:36.988 "params": { 00:21:36.988 "name": "Nvme5", 00:21:36.988 "trtype": "tcp", 00:21:36.988 "traddr": "10.0.0.2", 00:21:36.988 "adrfam": "ipv4", 00:21:36.988 "trsvcid": "4420", 00:21:36.988 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:36.988 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:36.988 "hdgst": false, 00:21:36.988 "ddgst": false 00:21:36.988 }, 00:21:36.988 "method": "bdev_nvme_attach_controller" 00:21:36.988 },{ 00:21:36.988 "params": { 00:21:36.988 "name": "Nvme6", 00:21:36.988 "trtype": "tcp", 00:21:36.988 "traddr": "10.0.0.2", 00:21:36.988 "adrfam": "ipv4", 00:21:36.988 "trsvcid": "4420", 00:21:36.988 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:36.988 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:36.988 "hdgst": false, 00:21:36.988 "ddgst": false 00:21:36.988 }, 
00:21:36.988 "method": "bdev_nvme_attach_controller" 00:21:36.988 },{ 00:21:36.988 "params": { 00:21:36.988 "name": "Nvme7", 00:21:36.988 "trtype": "tcp", 00:21:36.988 "traddr": "10.0.0.2", 00:21:36.988 "adrfam": "ipv4", 00:21:36.988 "trsvcid": "4420", 00:21:36.988 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:36.988 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:36.988 "hdgst": false, 00:21:36.988 "ddgst": false 00:21:36.988 }, 00:21:36.988 "method": "bdev_nvme_attach_controller" 00:21:36.988 },{ 00:21:36.988 "params": { 00:21:36.988 "name": "Nvme8", 00:21:36.988 "trtype": "tcp", 00:21:36.988 "traddr": "10.0.0.2", 00:21:36.988 "adrfam": "ipv4", 00:21:36.988 "trsvcid": "4420", 00:21:36.988 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:36.988 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:36.988 "hdgst": false, 00:21:36.988 "ddgst": false 00:21:36.988 }, 00:21:36.988 "method": "bdev_nvme_attach_controller" 00:21:36.988 },{ 00:21:36.988 "params": { 00:21:36.988 "name": "Nvme9", 00:21:36.988 "trtype": "tcp", 00:21:36.988 "traddr": "10.0.0.2", 00:21:36.988 "adrfam": "ipv4", 00:21:36.988 "trsvcid": "4420", 00:21:36.988 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:36.988 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:36.988 "hdgst": false, 00:21:36.988 "ddgst": false 00:21:36.988 }, 00:21:36.988 "method": "bdev_nvme_attach_controller" 00:21:36.988 },{ 00:21:36.988 "params": { 00:21:36.988 "name": "Nvme10", 00:21:36.988 "trtype": "tcp", 00:21:36.988 "traddr": "10.0.0.2", 00:21:36.988 "adrfam": "ipv4", 00:21:36.988 "trsvcid": "4420", 00:21:36.988 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:36.988 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:36.988 "hdgst": false, 00:21:36.988 "ddgst": false 00:21:36.988 }, 00:21:36.988 "method": "bdev_nvme_attach_controller" 00:21:36.988 }' 00:21:37.248 [2024-07-24 19:22:23.231891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.248 [2024-07-24 19:22:23.302278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.627 Running I/O for 1 seconds... 
00:21:40.008 
00:21:40.008 Latency(us)
00:21:40.008 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:40.008 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:40.008 Verification LBA range: start 0x0 length 0x400
00:21:40.008 Nvme1n1 : 1.11 289.21 18.08 0.00 0.00 219358.17 16252.93 204682.04
00:21:40.008 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:40.008 Verification LBA range: start 0x0 length 0x400
00:21:40.008 Nvme2n1 : 1.10 290.06 18.13 0.00 0.00 215690.94 18035.51 205520.90
00:21:40.008 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:40.008 Verification LBA range: start 0x0 length 0x400
00:21:40.008 Nvme3n1 : 1.09 294.73 18.42 0.00 0.00 209363.44 17511.22 205520.90
00:21:40.008 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:40.008 Verification LBA range: start 0x0 length 0x400
00:21:40.008 Nvme4n1 : 1.10 290.91 18.18 0.00 0.00 209084.91 17930.65 205520.90
00:21:40.008 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:40.008 Verification LBA range: start 0x0 length 0x400
00:21:40.008 Nvme5n1 : 1.14 281.69 17.61 0.00 0.00 213450.42 16672.36 226492.42
00:21:40.008 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:40.008 Verification LBA range: start 0x0 length 0x400
00:21:40.008 Nvme6n1 : 1.11 287.74 17.98 0.00 0.00 205639.35 17196.65 191260.26
00:21:40.008 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:40.008 Verification LBA range: start 0x0 length 0x400
00:21:40.008 Nvme7n1 : 1.09 293.20 18.33 0.00 0.00 198537.38 17720.93 204682.04
00:21:40.008 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:40.008 Verification LBA range: start 0x0 length 0x400
00:21:40.008 Nvme8n1 : 1.11 287.28 17.95 0.00 0.00 200035.70 17930.65 204682.04
00:21:40.008 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:40.008 Verification LBA range: start 0x0 length 0x400
00:21:40.008 Nvme9n1 : 1.14 279.49 17.47 0.00 0.00 203336.42 18035.51 231525.58
00:21:40.008 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:40.008 Verification LBA range: start 0x0 length 0x400
00:21:40.008 Nvme10n1 : 1.16 330.19 20.64 0.00 0.00 170017.38 8231.32 213070.64
00:21:40.008 ===================================================================================================================
00:21:40.008 Total : 2924.50 182.78 0.00 0.00 203776.23 8231.32 231525.58
00:21:40.008 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget
00:21:40.008 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:21:40.008 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:21:40.008 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:40.009 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini
00:21:40.009 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup
00:21:40.009 19:22:26
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:21:40.009 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:40.009 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:21:40.009 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:40.009 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:40.009 rmmod nvme_tcp 00:21:40.009 rmmod nvme_fabrics 00:21:40.009 rmmod nvme_keyring 00:21:40.009 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:40.009 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:21:40.009 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:21:40.009 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1585693 ']' 00:21:40.009 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1585693 00:21:40.009 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 1585693 ']' 00:21:40.009 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 1585693 00:21:40.009 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:21:40.009 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:40.009 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1585693 00:21:40.009 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:40.009 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:40.009 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1585693' 00:21:40.009 killing process with pid 1585693 00:21:40.009 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 1585693 00:21:40.009 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 1585693 00:21:40.579 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:40.579 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:40.579 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:40.579 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:40.579 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:40.579 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
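Note: the teardown traced above first unloads the kernel initiator modules (modprobe -v -r nvme-tcp, then nvme-fabrics, hence the rmmod lines) and then stops the target with killprocess 1585693. A condensed sketch of that helper, reconstructed from the @950-@974 expansion visible in the trace; simplified, the real autotest_common.sh version has extra handling for sudo-wrapped and non-Linux processes:

killprocess() {
  local pid=$1 process_name=
  [[ -z $pid ]] && return 1
  kill -0 "$pid" 2> /dev/null || return 0       # nothing left to kill
  if [[ $(uname) == Linux ]]; then
    process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_1 in this run
  fi
  [[ $process_name == sudo ]] && return 1       # the trace checks this before killing
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"                                   # reap it so sockets and hugepages are released
}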
00:21:40.579 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:40.579 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.490 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:42.490 00:21:42.490 real 0m16.461s 00:21:42.490 user 0m34.410s 00:21:42.490 sys 0m6.985s 00:21:42.490 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:42.490 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:42.490 ************************************ 00:21:42.490 END TEST nvmf_shutdown_tc1 00:21:42.490 ************************************ 00:21:42.490 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:42.490 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:42.490 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:42.490 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:42.490 ************************************ 00:21:42.490 START TEST nvmf_shutdown_tc2 00:21:42.490 ************************************ 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:42.751 19:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:42.751 19:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:42.751 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:42.751 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:42.751 19:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:42.751 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.752 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:42.752 Found net devices under 0000:af:00.0: cvl_0_0 00:21:42.752 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.752 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:42.752 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.752 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:42.752 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.752 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:42.752 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:42.752 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.752 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:42.752 Found net devices under 0000:af:00.1: cvl_0_1 00:21:42.752 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.752 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:42.752 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:42.752 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:42.752 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:42.752 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:42.752 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:42.752 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:42.752 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:42.752 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:42.752 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:42.752 19:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:42.752 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:42.752 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:42.752 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:42.752 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:42.752 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:42.752 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:42.752 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:42.752 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:42.752 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:42.752 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:42.752 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:43.012 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:43.012 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:43.012 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:43.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:43.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:21:43.012 00:21:43.012 --- 10.0.0.2 ping statistics --- 00:21:43.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.012 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:21:43.012 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:43.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:43.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:21:43.012 00:21:43.012 --- 10.0.0.1 ping statistics --- 00:21:43.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.012 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:21:43.012 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:43.012 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:21:43.012 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:43.012 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:43.012 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:43.012 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:43.012 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:43.012 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:43.012 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:43.012 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:43.012 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:43.012 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:43.012 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:43.012 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1587625 00:21:43.012 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1587625 00:21:43.012 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:43.012 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1587625 ']' 00:21:43.012 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.012 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:43.012 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
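Note: the nvmf_tcp_init sequence traced above builds the test topology on a single host by moving the target-side port (cvl_0_0) into a network namespace, leaving the initiator side (cvl_0_1) in the default namespace, and proving connectivity both ways with ping. The same steps collected into one runnable sequence; the interface names and 10.0.0.0/24 addressing are the ones this log uses, so substitute your own ports elsewhere:

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                    # target port now lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1             # target -> initiator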
00:21:43.012 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:43.012 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:43.012 [2024-07-24 19:22:29.170484] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:21:43.012 [2024-07-24 19:22:29.170533] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:43.012 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.012 [2024-07-24 19:22:29.243380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:43.272 [2024-07-24 19:22:29.312284] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:43.272 [2024-07-24 19:22:29.312324] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:43.272 [2024-07-24 19:22:29.312334] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:43.272 [2024-07-24 19:22:29.312343] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:43.272 [2024-07-24 19:22:29.312350] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:43.272 [2024-07-24 19:22:29.312457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:43.272 [2024-07-24 19:22:29.312559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:43.272 [2024-07-24 19:22:29.312669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:43.272 [2024-07-24 19:22:29.312671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:43.842 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:43.842 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:21:43.842 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:43.842 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:43.842 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:43.842 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:43.842 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:43.842 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.842 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:43.842 [2024-07-24 19:22:30.028111] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:43.842 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.842 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # 
num_subsystems=({1..10}) 00:21:43.842 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:43.842 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:43.842 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:43.842 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:43.842 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:43.842 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:43.842 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:43.842 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:43.842 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:43.842 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:43.842 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:43.842 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:43.842 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:43.842 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:43.842 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:43.842 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:43.842 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:44.103 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:44.103 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:44.103 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:44.103 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:44.103 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:44.103 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:44.103 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:44.103 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:44.103 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.103 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@10 -- # set +x 00:21:44.103 Malloc1 00:21:44.103 [2024-07-24 19:22:30.138779] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:44.103 Malloc2 00:21:44.103 Malloc3 00:21:44.103 Malloc4 00:21:44.103 Malloc5 00:21:44.103 Malloc6 00:21:44.362 Malloc7 00:21:44.362 Malloc8 00:21:44.362 Malloc9 00:21:44.362 Malloc10 00:21:44.362 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.362 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:44.362 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:44.362 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:44.362 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1587875 00:21:44.362 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1587875 /var/tmp/bdevperf.sock 00:21:44.362 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1587875 ']' 00:21:44.362 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:44.362 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:44.362 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:44.362 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:44.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
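Note: /dev/fd/63 in the bdevperf command line above is bash process substitution; the generated attach-controller JSON is fed to bdevperf without a temporary file, and waitforlisten then blocks until the RPC socket /var/tmp/bdevperf.sock accepts connections. The launch pattern, assuming the gen_nvmf_target_json sketch shown earlier (path shortened from the full workspace path in the log):

# Drive verify I/O (queue depth 64, 64 KiB I/Os, 10 s) against all ten subsystems.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!
waitforlisten "$perfpid" /var/tmp/bdevperf.sock    # test-framework helper, as in the trace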
00:21:44.362 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10
00:21:44.362 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:21:44.362 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:21:44.362 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=()
00:21:44.362 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config
00:21:44.362 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:21:44.362 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:21:44.362 {
00:21:44.362   "params": {
00:21:44.362     "name": "Nvme$subsystem",
00:21:44.362     "trtype": "$TEST_TRANSPORT",
00:21:44.362     "traddr": "$NVMF_FIRST_TARGET_IP",
00:21:44.362     "adrfam": "ipv4",
00:21:44.362     "trsvcid": "$NVMF_PORT",
00:21:44.362     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:21:44.362     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:21:44.362     "hdgst": ${hdgst:-false},
00:21:44.362     "ddgst": ${ddgst:-false}
00:21:44.362   },
00:21:44.362   "method": "bdev_nvme_attach_controller"
00:21:44.362 }
00:21:44.362 EOF
00:21:44.362 )")
00:21:44.362 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat
[... the @534/@554 heredoc-plus-cat pair above repeats verbatim for each of the ten subsystems; the other nine iterations are elided ...]
00:21:44.622 [2024-07-24 19:22:30.629480] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization...
00:21:44.623 [2024-07-24 19:22:30.629534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1587875 ]
00:21:44.623 EAL: No free 2048 kB hugepages reported on node 1
00:21:44.623 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq .
00:21:44.623 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=,
00:21:44.623 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:21:44.623   "params": {
00:21:44.623     "name": "Nvme1",
00:21:44.623     "trtype": "tcp",
00:21:44.623     "traddr": "10.0.0.2",
00:21:44.623     "adrfam": "ipv4",
00:21:44.623     "trsvcid": "4420",
00:21:44.623     "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:21:44.623     "hostnqn": "nqn.2016-06.io.spdk:host1",
00:21:44.623     "hdgst": false,
00:21:44.623     "ddgst": false
00:21:44.623   },
00:21:44.624   "method": "bdev_nvme_attach_controller"
00:21:44.624 },{
[... matching stanzas for Nvme2 through Nvme10, identical apart from the controller index, elided ...]
00:21:44.624 }'
00:21:44.624 [2024-07-24 19:22:30.701960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:44.624 [2024-07-24 19:22:30.770062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:21:46.096 Running I/O for 10 seconds...
00:21:46.096 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:46.096 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:21:46.096 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:46.096 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.096 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:46.355 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.355 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:46.355 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:46.355 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:46.355 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:21:46.355 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:21:46.355 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:46.355 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:46.355 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:46.355 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.355 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:46.355 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:46.355 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.355 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:46.355 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:46.355 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:46.614 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:46.614 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:46.614 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:46.614 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:46.614 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.614 19:22:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:46.614 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.614 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:21:46.614 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:21:46.614 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:46.874 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:46.874 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:46.874 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:46.874 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:46.874 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.874 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:46.874 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.874 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:21:46.874 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:21:46.874 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:21:46.874 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:21:46.874 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:21:46.874 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1587875 00:21:46.874 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1587875 ']' 00:21:46.874 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1587875 00:21:46.874 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:21:46.874 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:46.874 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1587875 00:21:47.133 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:47.133 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:47.134 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1587875' 00:21:47.134 killing process with pid 1587875 00:21:47.134 19:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1587875
00:21:47.134 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1587875
00:21:47.134 Received shutdown signal, test time was about 0.930011 seconds
00:21:47.134
00:21:47.134 Latency(us)
00:21:47.134 Device Information                                                       : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:21:47.134 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:47.134 	 Verification LBA range: start 0x0 length 0x400
00:21:47.134 	 Nvme1n1             :       0.92     278.77      17.42      0.00      0.00  227216.38   17930.65  201326.59
00:21:47.134 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:47.134 	 Verification LBA range: start 0x0 length 0x400
00:21:47.134 	 Nvme2n1             :       0.91     282.07      17.63      0.00      0.00  220722.38   18350.08  223136.97
00:21:47.134 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:47.134 	 Verification LBA range: start 0x0 length 0x400
00:21:47.134 	 Nvme3n1             :       0.89     287.57      17.97      0.00      0.00  212580.35   17930.65  202165.45
00:21:47.134 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:47.134 	 Verification LBA range: start 0x0 length 0x400
00:21:47.134 	 Nvme4n1             :       0.91     280.36      17.52      0.00      0.00  214780.52   17825.79  190421.40
00:21:47.134 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:47.134 	 Verification LBA range: start 0x0 length 0x400
00:21:47.134 	 Nvme5n1             :       0.90     285.40      17.84      0.00      0.00  207063.45   16252.93  200487.73
00:21:47.134 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:47.134 	 Verification LBA range: start 0x0 length 0x400
00:21:47.134 	 Nvme6n1             :       0.92     278.31      17.39      0.00      0.00  208959.69   24326.96  208876.34
00:21:47.134 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:47.134 	 Verification LBA range: start 0x0 length 0x400
00:21:47.134 	 Nvme7n1             :       0.93     345.00      21.56      0.00      0.00  165711.38   15309.21  201326.59
00:21:47.134 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:47.134 	 Verification LBA range: start 0x0 length 0x400
00:21:47.134 	 Nvme8n1             :       0.90     283.20      17.70      0.00      0.00  197673.37   19293.80  208876.34
00:21:47.134 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:47.134 	 Verification LBA range: start 0x0 length 0x400
00:21:47.134 	 Nvme9n1             :       0.93     275.45      17.22      0.00      0.00  199462.50    5872.03  229847.86
00:21:47.134 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:47.134 	 Verification LBA range: start 0x0 length 0x400
00:21:47.134 	 Nvme10n1            :       0.92     277.27      17.33      0.00      0.00  194954.44   16462.64  208876.34
00:21:47.134 ===================================================================================================================
00:21:47.134 	 Total               :              2873.40     179.59      0.00      0.00  203956.32    5872.03  229847.86
00:21:47.392 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:21:48.328 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1587625
00:21:48.328 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:21:48.328 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:21:48.328 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:48.328 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:48.328 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:48.328 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:48.328 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:21:48.328 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:48.328 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:21:48.328 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:48.328 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:48.328 rmmod nvme_tcp 00:21:48.328 rmmod nvme_fabrics 00:21:48.328 rmmod nvme_keyring 00:21:48.328 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:48.328 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:21:48.329 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:21:48.329 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1587625 ']' 00:21:48.329 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1587625 00:21:48.329 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1587625 ']' 00:21:48.329 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1587625 00:21:48.329 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:21:48.329 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:48.329 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1587625 00:21:48.588 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:48.588 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:48.588 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1587625' 00:21:48.588 killing process with pid 1587625 00:21:48.588 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1587625 00:21:48.588 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1587625 00:21:48.847 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:48.847 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
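Two small helpers drive the shutdown sequence traced above: waitforio polls the bdevperf RPC socket until Nvme1n1 has accumulated at least 100 completed reads (here 3, then 67, then 131 ops across 0.25 s polls), and killprocess sanity-checks a pid's comm (reactor_0/reactor_1 above) before kill/wait. A hedged reconstruction from the xtrace, simplified for illustration rather than copied from the SPDK source:

waitforio() {                                    # target/shutdown.sh @50-@69 in the trace
	local sock=$1 bdev=$2                    # e.g. /var/tmp/bdevperf.sock Nvme1n1
	[ -n "$sock" ] || return 1
	[ -n "$bdev" ] || return 1
	local ret=1 i read_io_count
	for ((i = 10; i != 0; i--)); do          # at most ten polls
		read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" |
			jq -r '.bdevs[0].num_read_ops')
		if [ "$read_io_count" -ge 100 ]; then   # enough verify I/O has flowed
			ret=0
			break
		fi
		sleep 0.25
	done
	return $ret
}

killprocess() {                                  # common/autotest_common.sh @950-@974
	local pid=$1 process_name=
	[ -n "$pid" ] || return 1
	kill -0 "$pid"                           # fails fast if the pid is already gone
	[ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
	if [ "$process_name" != sudo ]; then     # never signal a sudo wrapper directly
		echo "killing process with pid $pid"
		kill "$pid"
		wait "$pid"                      # reap the child and surface its exit status
	fi
}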
00:21:48.847 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:48.847 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:48.847 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:48.847 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.847 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:48.847 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.384 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:51.384 00:21:51.384 real 0m8.299s 00:21:51.384 user 0m25.047s 00:21:51.384 sys 0m1.695s 00:21:51.384 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:51.384 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:51.384 ************************************ 00:21:51.384 END TEST nvmf_shutdown_tc2 00:21:51.385 ************************************ 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:51.385 ************************************ 00:21:51.385 START TEST nvmf_shutdown_tc3 00:21:51.385 ************************************ 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:51.385 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:51.385 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:51.385 19:22:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:51.385 Found net devices under 0000:af:00.0: cvl_0_0 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:51.385 Found net devices under 0000:af:00.1: cvl_0_1 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.385 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:51.386 19:22:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:51.386 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:51.386 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:21:51.386 00:21:51.386 --- 10.0.0.2 ping statistics --- 00:21:51.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.386 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:51.386 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:51.386 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:21:51.386 00:21:51.386 --- 10.0.0.1 ping statistics --- 00:21:51.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.386 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1589154 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1589154 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1589154 ']' 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:51.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
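With the two e810 ports discovered as cvl_0_0/cvl_0_1, nvmf_tcp_init builds a self-contained test bed: the target port is moved into its own network namespace so the SPDK target (10.0.0.2) and the initiator side (10.0.0.1) can share one host. Condensed from the ip/iptables commands in the trace, with interface names and addresses exactly as traced:

# nvmf/common.sh @229-@268, condensed from the trace above.
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"       # target NIC into the namespace

ip addr add "$NVMF_INITIATOR_IP/24" dev cvl_0_1          # initiator stays in the root ns
ip netns exec "$NVMF_TARGET_NAMESPACE" \
	ip addr add "$NVMF_FIRST_TARGET_IP/24" dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in

ping -c 1 "$NVMF_FIRST_TARGET_IP"                        # root ns -> namespace
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 "$NVMF_INITIATOR_IP"

The 0.273 ms and 0.166 ms round trips above confirm the path in both directions before nvmf_tgt is started inside the namespace with -m 0x1E (binary 11110, i.e. reactors pinned to cores 1-4, matching the "Total cores available: 4" and the four "Reactor started on core" notices that follow).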
00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:51.386 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:51.386 [2024-07-24 19:22:37.558462] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:21:51.386 [2024-07-24 19:22:37.558509] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:51.386 EAL: No free 2048 kB hugepages reported on node 1 00:21:51.646 [2024-07-24 19:22:37.633306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:51.646 [2024-07-24 19:22:37.702880] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:51.646 [2024-07-24 19:22:37.702922] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:51.646 [2024-07-24 19:22:37.702932] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:51.646 [2024-07-24 19:22:37.702940] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:51.646 [2024-07-24 19:22:37.702946] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:51.646 [2024-07-24 19:22:37.703058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:51.646 [2024-07-24 19:22:37.703123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:51.646 [2024-07-24 19:22:37.703236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:51.646 [2024-07-24 19:22:37.703237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:52.218 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:52.218 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:21:52.218 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:52.218 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:52.218 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:52.218 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:52.218 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:52.218 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.218 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:52.218 [2024-07-24 19:22:38.409971] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:52.218 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.218 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # 
num_subsystems=({1..10}) 00:21:52.218 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:52.218 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:52.218 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:52.218 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:52.218 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:52.218 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:52.218 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:52.218 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:52.218 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:52.218 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:52.218 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:52.218 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:52.218 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:52.218 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:52.218 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:52.218 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:52.476 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:52.476 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:52.476 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:52.476 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:52.476 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:52.476 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:52.476 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:52.476 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:52.476 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:52.476 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.476 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:21:52.476 Malloc1 00:21:52.476 [2024-07-24 19:22:38.520801] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:52.476 Malloc2 00:21:52.476 Malloc3 00:21:52.476 Malloc4 00:21:52.476 Malloc5 00:21:52.476 Malloc6 00:21:52.735 Malloc7 00:21:52.735 Malloc8 00:21:52.735 Malloc9 00:21:52.735 Malloc10 00:21:52.736 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.736 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:52.736 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:52.736 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:52.736 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1589463 00:21:52.736 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1589463 /var/tmp/bdevperf.sock 00:21:52.736 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1589463 ']' 00:21:52.736 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:52.736 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:52.736 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:52.736 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:52.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
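The bdevperf invocation above reads its controller list from /dev/fd/63, which is bash process substitution: the harness pipes the output of gen_nvmf_target_json (traced next) straight into --json. A sketch of the plumbing as reconstructed from the trace; $rootdir naming the repo checkout is an assumption here, and the exact shutdown.sh wording may differ:

# Roughly what target/shutdown.sh @124-@126 does ($rootdir is assumed).
"$rootdir/build/examples/bdevperf" \
	-r /var/tmp/bdevperf.sock \
	--json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
	-q 64 -o 65536 -w verify -t 10 &
perfpid=$!                                       # 1589463 in this run
waitforlisten "$perfpid" /var/tmp/bdevperf.sock  # block until the RPC socket answers

The -q 64 -o 65536 -w verify -t 10 flags are the queue depth, 64 KiB I/O size, verify workload and 10 s runtime that reappear in the per-job headers of the tc2 results table above.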
00:21:52.736 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10
00:21:52.736 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable
00:21:52.736 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:52.736 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=()
00:21:52.736 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config
[... as in nvmf_shutdown_tc2 above, the nvmf/common.sh @534/@554 loop appends the identical bdev_nvme_attach_controller heredoc once per subsystem; the ten iterations are elided ...]
00:21:52.996 [2024-07-24 19:22:39.001509] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization...
00:21:52.996 [2024-07-24 19:22:39.001562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1589463 ]
00:21:52.996 EAL: No free 2048 kB hugepages reported on node 1
00:21:52.996 19:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq .
00:21:52.996 19:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=,
00:21:52.996 19:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:21:52.996   "params": {
00:21:52.996     "name": "Nvme1",
00:21:52.996     "trtype": "tcp",
00:21:52.996     "traddr": "10.0.0.2",
00:21:52.996     "adrfam": "ipv4",
00:21:52.996     "trsvcid": "4420",
00:21:52.996     "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:21:52.996     "hostnqn": "nqn.2016-06.io.spdk:host1",
00:21:52.996     "hdgst": false,
00:21:52.996     "ddgst": false
00:21:52.996   },
00:21:52.996   "method": "bdev_nvme_attach_controller"
00:21:52.996 },{
[... matching stanzas for Nvme2 through Nvme6, identical apart from the controller index, elided ...]
00:21:52.997 },{
00:21:52.997   "params": {
00:21:52.997     "name": "Nvme7",
00:21:52.997 "trtype": "tcp", 00:21:52.997 "traddr": "10.0.0.2", 00:21:52.997 "adrfam": "ipv4", 00:21:52.997 "trsvcid": "4420", 00:21:52.997 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:52.997 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:52.997 "hdgst": false, 00:21:52.997 "ddgst": false 00:21:52.997 }, 00:21:52.997 "method": "bdev_nvme_attach_controller" 00:21:52.997 },{ 00:21:52.997 "params": { 00:21:52.997 "name": "Nvme8", 00:21:52.997 "trtype": "tcp", 00:21:52.997 "traddr": "10.0.0.2", 00:21:52.997 "adrfam": "ipv4", 00:21:52.997 "trsvcid": "4420", 00:21:52.997 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:52.997 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:52.997 "hdgst": false, 00:21:52.997 "ddgst": false 00:21:52.997 }, 00:21:52.997 "method": "bdev_nvme_attach_controller" 00:21:52.997 },{ 00:21:52.997 "params": { 00:21:52.997 "name": "Nvme9", 00:21:52.997 "trtype": "tcp", 00:21:52.997 "traddr": "10.0.0.2", 00:21:52.997 "adrfam": "ipv4", 00:21:52.997 "trsvcid": "4420", 00:21:52.997 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:52.997 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:52.997 "hdgst": false, 00:21:52.997 "ddgst": false 00:21:52.997 }, 00:21:52.997 "method": "bdev_nvme_attach_controller" 00:21:52.997 },{ 00:21:52.997 "params": { 00:21:52.997 "name": "Nvme10", 00:21:52.997 "trtype": "tcp", 00:21:52.997 "traddr": "10.0.0.2", 00:21:52.997 "adrfam": "ipv4", 00:21:52.997 "trsvcid": "4420", 00:21:52.997 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:52.997 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:52.997 "hdgst": false, 00:21:52.997 "ddgst": false 00:21:52.997 }, 00:21:52.997 "method": "bdev_nvme_attach_controller" 00:21:52.997 }' 00:21:52.997 [2024-07-24 19:22:39.072978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.997 [2024-07-24 19:22:39.140959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.903 Running I/O for 10 seconds... 
00:21:54.903 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:54.903 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:21:54.903 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:54.903 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.903 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:54.903 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.903 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:54.903 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:54.903 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:54.903 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:54.903 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:21:54.903 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:21:54.903 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:54.903 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:54.903 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:54.903 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.903 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:54.903 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:54.903 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.903 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:54.903 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:54.903 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:55.162 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:55.162 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:55.162 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:55.162 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:55.162 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.162 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:55.162 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.162 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:21:55.162 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:21:55.162 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:55.430 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:55.430 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:55.430 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:55.430 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.430 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:55.430 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:55.430 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.430 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=195 00:21:55.430 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:21:55.430 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:21:55.430 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:21:55.430 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:21:55.430 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1589154 00:21:55.430 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1589154 ']' 00:21:55.430 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1589154 00:21:55.430 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:21:55.430 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:55.430 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1589154 00:21:55.430 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:55.430 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:55.430 19:22:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1589154' 00:21:55.430 killing process with pid 1589154 00:21:55.430 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 1589154 00:21:55.430 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 1589154 00:21:55.430 [2024-07-24 19:22:41.619607] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.430 [2024-07-24 19:22:41.619697] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.430 [2024-07-24 19:22:41.619708] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.430 [2024-07-24 19:22:41.619721] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.430 [2024-07-24 19:22:41.619730] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.430 [2024-07-24 19:22:41.619739] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.430 [2024-07-24 19:22:41.619749] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.430 [2024-07-24 19:22:41.619758] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.430 [2024-07-24 19:22:41.619767] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.430 [2024-07-24 19:22:41.619776] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.430 [2024-07-24 19:22:41.619785] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.430 [2024-07-24 19:22:41.619794] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.430 [2024-07-24 19:22:41.619802] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.430 [2024-07-24 19:22:41.619811] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.430 [2024-07-24 19:22:41.619819] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.430 [2024-07-24 19:22:41.619828] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.430 [2024-07-24 19:22:41.619837] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.430 [2024-07-24 19:22:41.619846] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.430 [2024-07-24 19:22:41.619855] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) 
to be set 00:21:55.430 [2024-07-24 19:22:41.619864] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.430 [2024-07-24 19:22:41.619872] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.430 [2024-07-24 19:22:41.619881] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.430 [2024-07-24 19:22:41.619889] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.430 [2024-07-24 19:22:41.619898] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.430 [2024-07-24 19:22:41.619906] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.430 [2024-07-24 19:22:41.619920] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.430 [2024-07-24 19:22:41.619929] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.430 [2024-07-24 19:22:41.619938] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.430 [2024-07-24 19:22:41.619947] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.430 [2024-07-24 19:22:41.619955] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.430 [2024-07-24 19:22:41.619964] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.430 [2024-07-24 19:22:41.619973] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.430 [2024-07-24 19:22:41.619981] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.430 [2024-07-24 19:22:41.619990] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.430 [2024-07-24 19:22:41.619999] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.430 [2024-07-24 19:22:41.620007] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.431 [2024-07-24 19:22:41.620016] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.431 [2024-07-24 19:22:41.620025] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.431 [2024-07-24 19:22:41.620033] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.431 [2024-07-24 19:22:41.620042] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.431 [2024-07-24 19:22:41.620050] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.431 [2024-07-24 19:22:41.620059] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.431 [2024-07-24 19:22:41.620067] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.431 [2024-07-24 19:22:41.620076] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.431 [2024-07-24 19:22:41.620085] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.431 [2024-07-24 19:22:41.620094] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.431 [2024-07-24 19:22:41.620102] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.431 [2024-07-24 19:22:41.620111] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.431 [2024-07-24 19:22:41.620119] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.431 [2024-07-24 19:22:41.620128] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.431 [2024-07-24 19:22:41.620136] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.431 [2024-07-24 19:22:41.620145] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.431 [2024-07-24 19:22:41.620155] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.431 [2024-07-24 19:22:41.620163] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.431 [2024-07-24 19:22:41.620172] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.431 [2024-07-24 19:22:41.620181] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.431 [2024-07-24 19:22:41.620189] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.431 [2024-07-24 19:22:41.620197] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.431 [2024-07-24 19:22:41.620206] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.431 [2024-07-24 19:22:41.620214] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.431 [2024-07-24 19:22:41.620223] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.431 [2024-07-24 19:22:41.620231] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.431 [2024-07-24 19:22:41.620240] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c02d0 is same with the state(5) to be set 00:21:55.431 [2024-07-24 19:22:41.622347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.431 [2024-07-24 19:22:41.622382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.431 [2024-07-24 19:22:41.622401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.431 [2024-07-24 19:22:41.622411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.431 [2024-07-24 19:22:41.622422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.431 [2024-07-24 19:22:41.622431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.431 [2024-07-24 19:22:41.622442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.431 [2024-07-24 19:22:41.622451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.431 [2024-07-24 19:22:41.622462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.431 [2024-07-24 19:22:41.622472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.431 [2024-07-24 19:22:41.622483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.431 [2024-07-24 19:22:41.622492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.431 [2024-07-24 19:22:41.622503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.431 [2024-07-24 19:22:41.622512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.431 [2024-07-24 19:22:41.622523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.431 [2024-07-24 19:22:41.622536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.431 [2024-07-24 19:22:41.622547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.431 [2024-07-24 19:22:41.622556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.431 [2024-07-24 19:22:41.622567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.431 [2024-07-24 19:22:41.622576] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.431 [2024-07-24 19:22:41.622587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.431 [2024-07-24 19:22:41.622596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.431 [2024-07-24 19:22:41.622607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.431 [2024-07-24 19:22:41.622616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.431 [2024-07-24 19:22:41.622627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.431 [2024-07-24 19:22:41.622636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.431 [2024-07-24 19:22:41.622648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.431 [2024-07-24 19:22:41.622657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.431 [2024-07-24 19:22:41.622668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.431 [2024-07-24 19:22:41.622678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.431 [2024-07-24 19:22:41.622688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.431 [2024-07-24 19:22:41.622697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.431 [2024-07-24 19:22:41.622708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.431 [2024-07-24 19:22:41.622725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.431 [2024-07-24 19:22:41.622736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.431 [2024-07-24 19:22:41.622746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.431 [2024-07-24 19:22:41.622757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.431 [2024-07-24 19:22:41.622766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.431 [2024-07-24 19:22:41.622777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.431 [2024-07-24 19:22:41.622786] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.431 [2024-07-24 19:22:41.622799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.431 [2024-07-24 19:22:41.622809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.431 [2024-07-24 19:22:41.622819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.431 [2024-07-24 19:22:41.622828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.431 [2024-07-24 19:22:41.622838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.431 [2024-07-24 19:22:41.622848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.432 [2024-07-24 19:22:41.622859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.432 [2024-07-24 19:22:41.622868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.432 [2024-07-24 19:22:41.622878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.432 [2024-07-24 19:22:41.622887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.432 [2024-07-24 19:22:41.622898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.432 [2024-07-24 19:22:41.622907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.432 [2024-07-24 19:22:41.622918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.432 [2024-07-24 19:22:41.622928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.432 [2024-07-24 19:22:41.622938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.432 [2024-07-24 19:22:41.622947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.432 [2024-07-24 19:22:41.622958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.432 [2024-07-24 19:22:41.622967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.432 [2024-07-24 19:22:41.622978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.432 [2024-07-24 19:22:41.622987] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.432 [2024-07-24 19:22:41.622997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.432 [2024-07-24 19:22:41.623006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.432 [2024-07-24 19:22:41.623017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.432 [2024-07-24 19:22:41.623026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.432 [2024-07-24 19:22:41.623037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.432 [2024-07-24 19:22:41.623048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.432 [2024-07-24 19:22:41.623059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.432 [2024-07-24 19:22:41.623068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.432 [2024-07-24 19:22:41.623079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.432 [2024-07-24 19:22:41.623088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.432 [2024-07-24 19:22:41.623099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.432 [2024-07-24 19:22:41.623108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.432 [2024-07-24 19:22:41.623118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.432 [2024-07-24 19:22:41.623127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.432 [2024-07-24 19:22:41.623138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.432 [2024-07-24 19:22:41.623147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.432 [2024-07-24 19:22:41.623158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.432 [2024-07-24 19:22:41.623167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.432 [2024-07-24 19:22:41.623178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.432 [2024-07-24 19:22:41.623187] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.432 [2024-07-24 19:22:41.623198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.432 [2024-07-24 19:22:41.623207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.432 [2024-07-24 19:22:41.623218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.432 [2024-07-24 19:22:41.623227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.432 [2024-07-24 19:22:41.623238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.432 [2024-07-24 19:22:41.623247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.432 [2024-07-24 19:22:41.623258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.432 [2024-07-24 19:22:41.623266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.432 [2024-07-24 19:22:41.623277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.432 [2024-07-24 19:22:41.623286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.432 [2024-07-24 19:22:41.623298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.432 [2024-07-24 19:22:41.623307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.432 [2024-07-24 19:22:41.623318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.432 [2024-07-24 19:22:41.623327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.432 [2024-07-24 19:22:41.623338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.432 [2024-07-24 19:22:41.623347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.432 [2024-07-24 19:22:41.623357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.432 [2024-07-24 19:22:41.623367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.432 [2024-07-24 19:22:41.623377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.432 [2024-07-24 19:22:41.623387] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.432 [2024-07-24 19:22:41.623397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.432 [2024-07-24 19:22:41.623406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.432 [2024-07-24 19:22:41.623417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.432 [2024-07-24 19:22:41.623426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.432 [2024-07-24 19:22:41.623436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.432 [2024-07-24 19:22:41.623445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.432 [2024-07-24 19:22:41.623456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.432 [2024-07-24 19:22:41.623465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.432 [2024-07-24 19:22:41.623476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.432 [2024-07-24 19:22:41.623485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.432 [2024-07-24 19:22:41.623496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.432 [2024-07-24 19:22:41.623505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.432 [2024-07-24 19:22:41.623515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.432 [2024-07-24 19:22:41.623525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.432 [2024-07-24 19:22:41.623536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.432 [2024-07-24 19:22:41.623546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.432 [2024-07-24 19:22:41.623557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.432 [2024-07-24 19:22:41.623566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.432 [2024-07-24 19:22:41.623576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.433 [2024-07-24 19:22:41.623586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.433 [2024-07-24 19:22:41.623597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.433 [2024-07-24 19:22:41.623606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.433 [2024-07-24 19:22:41.623616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.433 [2024-07-24 19:22:41.623627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.433 [2024-07-24 19:22:41.623639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.433 [2024-07-24 19:22:41.623648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.433 [2024-07-24 19:22:41.623659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.433 [2024-07-24 19:22:41.623668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.433 [2024-07-24 19:22:41.623696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:55.433 [2024-07-24 19:22:41.624042] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set 00:21:55.433 [2024-07-24 19:22:41.624073] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set 00:21:55.433 [2024-07-24 19:22:41.624084] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set 00:21:55.433 [2024-07-24 19:22:41.624093] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set 00:21:55.433 [2024-07-24 19:22:41.624094] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ae8770 was disconnected and freed. reset controller. 
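The read_io_count progression above (3, then 67, then 195, then break and return 0) is the waitforio helper polling bdev_get_iostat on the bdevperf RPC socket until Nvme1n1 has completed at least 100 reads; only then does killprocess take down target pid 1589154, which is what triggers the SQ-deletion aborts and recv-state errors around this point. A hedged reconstruction of that polling loop (scripts/rpc.py stands in for the suite's rpc_cmd wrapper; the threshold and retry values are read off the trace):

# Poll a bdev's read-op count over an SPDK RPC socket until it crosses a
# threshold, or give up after 10 tries spaced 0.25 s apart.
waitforio_sketch() {
    local rpc_sock=$1 bdev=$2
    local ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

# Example, matching the trace: waitforio_sketch /var/tmp/bdevperf.sock Nvme1n1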
00:21:55.433 [2024-07-24 19:22:41.624102] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624111] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624120] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624129] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624137] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624146] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624155] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624168] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624177] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:55.433 [2024-07-24 19:22:41.624186] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.433 [2024-07-24 19:22:41.624195] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624205] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:55.433 [2024-07-24 19:22:41.624214] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.433 [2024-07-24 19:22:41.624224] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:55.433 [2024-07-24 19:22:41.624241] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.433 [2024-07-24 19:22:41.624250] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:55.433 [2024-07-24 19:22:41.624259] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.433 [2024-07-24 19:22:41.624268] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c55140 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624278] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624287] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624295] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624304] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624312] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624321] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624329] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:55.433 [2024-07-24 19:22:41.624339] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.433 [2024-07-24 19:22:41.624348] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:55.433 [2024-07-24 19:22:41.624358] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.433 [2024-07-24 19:22:41.624367] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:55.433 [2024-07-24 19:22:41.624376] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.433 [2024-07-24 19:22:41.624387] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624397] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:55.433 [2024-07-24 19:22:41.624406] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.433 [2024-07-24 19:22:41.624416] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9ec30 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624425] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624433] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624442] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624450] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624458] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624467] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.433 [2024-07-24 19:22:41.624475] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.434 [2024-07-24 19:22:41.624484] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.434 [2024-07-24 19:22:41.624493] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.434 [2024-07-24 19:22:41.624502] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.434 [2024-07-24 19:22:41.624510] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set
00:21:55.434 [2024-07-24 19:22:41.624519] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x16c0790 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.624527] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.624535] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.624543] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.624552] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.624561] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.624569] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.624577] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.624586] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.624594] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.624602] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.624612] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.624620] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.624629] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c0790 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.626081] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14efe60 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.626112] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14efe60 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.626122] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14efe60 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.626132] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14efe60 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.626141] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14efe60 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.626150] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14efe60 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.626158] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14efe60 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.626167] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14efe60 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.626176] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14efe60 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.626185] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14efe60 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.627118] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.627129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:21:55.434 [2024-07-24 19:22:41.627144] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.627155] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.627160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c55140 (9): Bad file descriptor 00:21:55.434 [2024-07-24 19:22:41.627164] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.627173] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.627183] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.627191] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.627200] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.627209] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.627217] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.627227] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.627236] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.627244] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.627253] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.627262] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.627270] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.627279] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set 00:21:55.434 [2024-07-24 19:22:41.627287] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set 
00:21:55.434 [2024-07-24 19:22:41.627296] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.434 [2024-07-24 19:22:41.627304] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.434 [2024-07-24 19:22:41.627312] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.434 [2024-07-24 19:22:41.627315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.434 [2024-07-24 19:22:41.627321] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.434 [2024-07-24 19:22:41.627328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.434 [2024-07-24 19:22:41.627331] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.434 [2024-07-24 19:22:41.627341] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.434 [2024-07-24 19:22:41.627345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.434 [2024-07-24 19:22:41.627353] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.434 [2024-07-24 19:22:41.627356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.434 [2024-07-24 19:22:41.627362] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.434 [2024-07-24 19:22:41.627368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.434 [2024-07-24 19:22:41.627371] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.434 [2024-07-24 19:22:41.627378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.434 [2024-07-24 19:22:41.627381] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.434 [2024-07-24 19:22:41.627390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.434 [2024-07-24 19:22:41.627390] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.434 [2024-07-24 19:22:41.627401] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.434 [2024-07-24 19:22:41.627401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.434 [2024-07-24 19:22:41.627412] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.434 [2024-07-24 19:22:41.627415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.434 [2024-07-24 19:22:41.627422] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.434 [2024-07-24 19:22:41.627425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.434 [2024-07-24 19:22:41.627432] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.434 [2024-07-24 19:22:41.627438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.434 [2024-07-24 19:22:41.627441] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.434 [2024-07-24 19:22:41.627449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.435 [2024-07-24 19:22:41.627450] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.435 [2024-07-24 19:22:41.627460] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.435 [2024-07-24 19:22:41.627460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.435 [2024-07-24 19:22:41.627469] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.435 [2024-07-24 19:22:41.627471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.435 [2024-07-24 19:22:41.627479] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.435 [2024-07-24 19:22:41.627484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.435 [2024-07-24 19:22:41.627490] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.435 [2024-07-24 19:22:41.627494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.435 [2024-07-24 19:22:41.627499] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.435 [2024-07-24 19:22:41.627505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.435 [2024-07-24 19:22:41.627508] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.435 [2024-07-24 19:22:41.627515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.435 [2024-07-24 19:22:41.627517] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.435 [2024-07-24 19:22:41.627528] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.435 [2024-07-24 19:22:41.627528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.435 [2024-07-24 19:22:41.627539] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.435 [2024-07-24 19:22:41.627540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.435 [2024-07-24 19:22:41.627548] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.435 [2024-07-24 19:22:41.627553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.435 [2024-07-24 19:22:41.627558] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.435 [2024-07-24 19:22:41.627563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.435 [2024-07-24 19:22:41.627567] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.435 [2024-07-24 19:22:41.627575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.435 [2024-07-24 19:22:41.627576] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.435 [2024-07-24 19:22:41.627585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.435 [2024-07-24 19:22:41.627586] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.435 [2024-07-24 19:22:41.627596] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.435 [2024-07-24 19:22:41.627597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.435 [2024-07-24 19:22:41.627605] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.435 [2024-07-24 19:22:41.627608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.435 [2024-07-24 19:22:41.627614] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.435 [2024-07-24 19:22:41.627619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.435 [2024-07-24 19:22:41.627624] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.435 [2024-07-24 19:22:41.627629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.435 [2024-07-24 19:22:41.627634] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.435 [2024-07-24 19:22:41.627641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.435 [2024-07-24 19:22:41.627643] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.435 [2024-07-24 19:22:41.627652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.435 [2024-07-24 19:22:41.627653] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.435 [2024-07-24 19:22:41.627664] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.435 [2024-07-24 19:22:41.627665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.435 [2024-07-24 19:22:41.627673] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.435 [2024-07-24 19:22:41.627675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.435 [2024-07-24 19:22:41.627682] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.435 [2024-07-24 19:22:41.627687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.435 [2024-07-24 19:22:41.627692] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.435 [2024-07-24 19:22:41.627696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.435 [2024-07-24 19:22:41.627701] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.435 [2024-07-24 19:22:41.627708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.435 [2024-07-24 19:22:41.627710] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.435 [2024-07-24 19:22:41.627724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.435 [2024-07-24 19:22:41.627724] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0340 is same with the state(5) to be set
00:21:55.435 [2024-07-24 19:22:41.627737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.435 [2024-07-24 19:22:41.627747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.435 [2024-07-24 19:22:41.627758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.435 [2024-07-24 19:22:41.627768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.435 [2024-07-24 19:22:41.627778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.435 [2024-07-24 19:22:41.627791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.435 [2024-07-24 19:22:41.627802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.435 [2024-07-24 19:22:41.627813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.435 [2024-07-24 19:22:41.627824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.436 [2024-07-24 19:22:41.627833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.436 [2024-07-24 19:22:41.627844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.436 [2024-07-24 19:22:41.627853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.436 [2024-07-24 19:22:41.627863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.436 [2024-07-24 19:22:41.627873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.436 [2024-07-24 19:22:41.627883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.436 [2024-07-24 19:22:41.627892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.436 [2024-07-24 19:22:41.627903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.436 [2024-07-24 19:22:41.627912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.436 [2024-07-24 19:22:41.627922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.436 [2024-07-24 19:22:41.627932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.436 [2024-07-24 19:22:41.627942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.436 [2024-07-24 19:22:41.627952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.436 [2024-07-24 19:22:41.627962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.436 [2024-07-24 19:22:41.627971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.436 [2024-07-24 19:22:41.627981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.436 [2024-07-24 19:22:41.627991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.436 [2024-07-24 19:22:41.628001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.436 [2024-07-24 19:22:41.628012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.436 [2024-07-24 19:22:41.628023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.436 [2024-07-24 19:22:41.628032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.436 [2024-07-24 19:22:41.628043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.436 [2024-07-24 19:22:41.628053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.436 [2024-07-24 19:22:41.628064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.436 [2024-07-24 19:22:41.628073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.436 [2024-07-24 19:22:41.628083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.436 [2024-07-24 19:22:41.628092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.436 [2024-07-24 19:22:41.628103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.436 [2024-07-24 19:22:41.628112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.436 [2024-07-24 19:22:41.628123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.436 [2024-07-24 19:22:41.628133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.436 [2024-07-24 19:22:41.628144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.436 [2024-07-24 19:22:41.628153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.436 [2024-07-24 19:22:41.628163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.436 [2024-07-24 19:22:41.628173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.436 [2024-07-24 19:22:41.628183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.436 [2024-07-24 19:22:41.628192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.436 [2024-07-24 19:22:41.628203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.436 [2024-07-24 19:22:41.628212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.436 [2024-07-24 19:22:41.628222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.436 [2024-07-24 19:22:41.628232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.436 [2024-07-24 19:22:41.628244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.436 [2024-07-24 19:22:41.628253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.436 [2024-07-24 19:22:41.628263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.436 [2024-07-24 19:22:41.628272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.436 [2024-07-24 19:22:41.628283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.436 [2024-07-24 19:22:41.628293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.436 [2024-07-24 19:22:41.628304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.436 [2024-07-24 19:22:41.628313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.436 [2024-07-24 19:22:41.628324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.436 [2024-07-24 19:22:41.628333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.436 [2024-07-24 19:22:41.628343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.436 [2024-07-24 19:22:41.628352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.436 [2024-07-24 19:22:41.628363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.436 [2024-07-24 19:22:41.628372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.436 [2024-07-24 19:22:41.628367] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0800 is same with the state(5) to be set
00:21:55.436 [2024-07-24 19:22:41.628383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.436 [2024-07-24 19:22:41.628389] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0800 is same with the state(5) to be set
00:21:55.436 [2024-07-24 19:22:41.628393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.436 [2024-07-24 19:22:41.628404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.436 [2024-07-24 19:22:41.628414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.436 [2024-07-24 19:22:41.628425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.436 [2024-07-24 19:22:41.628434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.436 [2024-07-24 19:22:41.628444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.436 [2024-07-24 19:22:41.628455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.436 [2024-07-24 19:22:41.628465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.436 [2024-07-24 19:22:41.628475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.436 [2024-07-24 19:22:41.628486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.436 [2024-07-24 19:22:41.628495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.436 [2024-07-24 19:22:41.628505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.436 [2024-07-24 19:22:41.628514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.436 [2024-07-24 19:22:41.628526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.437 [2024-07-24 19:22:41.628536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.436 [2024-07-24 19:22:41.628547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.437 [2024-07-24 19:22:41.628557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.437 [2024-07-24 19:22:41.628567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.437 [2024-07-24 19:22:41.628576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.437 [2024-07-24 19:22:41.628586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.437 [2024-07-24 19:22:41.628596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.437 [2024-07-24 19:22:41.628606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.437 [2024-07-24 19:22:41.628615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.437 [2024-07-24 19:22:41.628626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.437 [2024-07-24 19:22:41.628635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.437 [2024-07-24 19:22:41.628646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.437 [2024-07-24 19:22:41.628655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.437 [2024-07-24 19:22:41.628740] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1af1590 was disconnected and freed. reset controller.
00:21:55.437 [2024-07-24 19:22:41.629108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.437 [2024-07-24 19:22:41.629128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.437 [2024-07-24 19:22:41.629142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.437 [2024-07-24 19:22:41.629152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.437 [2024-07-24 19:22:41.629163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.437 [2024-07-24 19:22:41.629172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.437 [2024-07-24 19:22:41.629183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.437 [2024-07-24 19:22:41.629192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.437 [2024-07-24 19:22:41.629203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.437 [2024-07-24 19:22:41.629212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.437 [2024-07-24 19:22:41.629228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.437 [2024-07-24 19:22:41.629237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.437 [2024-07-24 19:22:41.629248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.437 [2024-07-24 19:22:41.629258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.437 [2024-07-24 19:22:41.629268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.437 [2024-07-24 19:22:41.629278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.437 [2024-07-24 19:22:41.629289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.437 [2024-07-24 19:22:41.629298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.437 [2024-07-24 19:22:41.629305] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.437 [2024-07-24 19:22:41.629308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.437 [2024-07-24 19:22:41.629319] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.437 [2024-07-24 19:22:41.629320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.437 [2024-07-24 19:22:41.629331] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.437 [2024-07-24 19:22:41.629333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.437 [2024-07-24 19:22:41.629340] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.437 [2024-07-24 19:22:41.629343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.437 [2024-07-24 19:22:41.629350] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.437 [2024-07-24 19:22:41.629354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.437 [2024-07-24 19:22:41.629360] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.437 [2024-07-24 19:22:41.629363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.437 [2024-07-24 19:22:41.629369] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.437 [2024-07-24 19:22:41.629374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.437 [2024-07-24 19:22:41.629378] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.437 [2024-07-24 19:22:41.629385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.437 [2024-07-24 19:22:41.629388] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.437 [2024-07-24 19:22:41.629396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.437 [2024-07-24 19:22:41.629397] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.437 [2024-07-24 19:22:41.629408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.437 [2024-07-24 19:22:41.629409] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.437 [2024-07-24 19:22:41.629420] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.437 [2024-07-24 19:22:41.629421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.437 [2024-07-24 19:22:41.629429] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.437 [2024-07-24 19:22:41.629431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.437 [2024-07-24 19:22:41.629438] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.437 [2024-07-24 19:22:41.629442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.437 [2024-07-24 19:22:41.629447] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.437 [2024-07-24 19:22:41.629452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.437 [2024-07-24 19:22:41.629456] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.437 [2024-07-24 19:22:41.629464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.437 [2024-07-24 19:22:41.629465] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.437 [2024-07-24 19:22:41.629475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.437 [2024-07-24 19:22:41.629475] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.437 [2024-07-24 19:22:41.629487] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.437 [2024-07-24 19:22:41.629489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.437 [2024-07-24 19:22:41.629495] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.437 [2024-07-24 19:22:41.629499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.437 [2024-07-24 19:22:41.629505] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.437 [2024-07-24 19:22:41.629510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.437 [2024-07-24 19:22:41.629515] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.437 [2024-07-24 19:22:41.629520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.437 [2024-07-24 19:22:41.629524] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.438 [2024-07-24 19:22:41.629533] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.438 [2024-07-24 19:22:41.629544] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629554] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.438 [2024-07-24 19:22:41.629563] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.438 [2024-07-24 19:22:41.629573] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.438 [2024-07-24 19:22:41.629582] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.438 [2024-07-24 19:22:41.629591] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.438 [2024-07-24 19:22:41.629600] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.438 [2024-07-24 19:22:41.629611] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629622] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.438 [2024-07-24 19:22:41.629631] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.438 [2024-07-24 19:22:41.629644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.438 [2024-07-24 19:22:41.629648] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.438 [2024-07-24 19:22:41.629657] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.438 [2024-07-24 19:22:41.629667] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.438 [2024-07-24 19:22:41.629681] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.438 [2024-07-24 19:22:41.629691] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.438 [2024-07-24 19:22:41.629700] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.438 [2024-07-24 19:22:41.629710] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.438 [2024-07-24 19:22:41.629723] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629734] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.438 [2024-07-24 19:22:41.629745] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.438 [2024-07-24 19:22:41.629755] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.438 [2024-07-24 19:22:41.629765] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.438 [2024-07-24 19:22:41.629774] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.438 [2024-07-24 19:22:41.629783] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.438 [2024-07-24 19:22:41.629793] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.438 [2024-07-24 19:22:41.629802] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.438 [2024-07-24 19:22:41.629812] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629824] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.438 [2024-07-24 19:22:41.629833] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.438 [2024-07-24 19:22:41.629842] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.438 [2024-07-24 19:22:41.629851] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.438 [2024-07-24 19:22:41.629860] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629869] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.438 [2024-07-24 19:22:41.629880] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.438 [2024-07-24 19:22:41.629890] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.438 [2024-07-24 19:22:41.629899] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.438 [2024-07-24 19:22:41.629908] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.438 [2024-07-24 19:22:41.629917] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.438 [2024-07-24 19:22:41.629925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.438 [2024-07-24 19:22:41.629926] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0b70 is same with the state(5) to be set
00:21:55.439 [2024-07-24 19:22:41.629938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.439 [2024-07-24 19:22:41.629948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.439 [2024-07-24 19:22:41.629958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.439 [2024-07-24 19:22:41.629969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.439 [2024-07-24 19:22:41.629980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.439 [2024-07-24 19:22:41.629989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.439 [2024-07-24 19:22:41.629999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.439 [2024-07-24 19:22:41.630008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.439 
[2024-07-24 19:22:41.630019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.439 [2024-07-24 19:22:41.630029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.439 [2024-07-24 19:22:41.630039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.439 [2024-07-24 19:22:41.630048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.439 [2024-07-24 19:22:41.630058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.439 [2024-07-24 19:22:41.630067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.439 [2024-07-24 19:22:41.630078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.439 [2024-07-24 19:22:41.630087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.439 [2024-07-24 19:22:41.630097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.439 [2024-07-24 19:22:41.630106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.439 [2024-07-24 19:22:41.630117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.439 [2024-07-24 19:22:41.630126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.439 [2024-07-24 19:22:41.630136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.439 [2024-07-24 19:22:41.630146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.439 [2024-07-24 19:22:41.630156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.439 [2024-07-24 19:22:41.630165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.439 [2024-07-24 19:22:41.631010] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631027] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631036] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631045] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631057] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631066] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631075] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631084] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631092] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631101] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631109] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631118] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631127] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631135] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631143] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631152] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631160] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631168] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631177] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631185] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631193] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631202] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631210] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631219] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631227] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631236] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the 
state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631245] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631253] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631262] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631270] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631278] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631288] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631297] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631305] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631313] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631322] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631330] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631339] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631347] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631356] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631364] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631373] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631381] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631390] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631398] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631407] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631415] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631423] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.439 [2024-07-24 19:22:41.631432] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.631441] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.631449] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.631457] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.631465] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.631474] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.631482] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.631497] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.631505] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.631513] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.631523] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.631532] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.631540] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.631548] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.631557] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1030 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.632368] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.632383] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.632392] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.632400] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.632409] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.632417] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 
19:22:41.632426] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.632435] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.632443] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.632452] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.632460] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.632469] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.632477] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.632486] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.632494] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.632502] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.632511] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.632519] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.632528] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.632536] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.632545] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.632553] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.632564] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.632573] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.632582] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.632624] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.632667] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.632712] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same 
with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.632764] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.632807] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.632850] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.632897] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.632942] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.632988] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.633038] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.633091] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.633135] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.633180] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.633225] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.633268] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.633313] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.633358] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.633402] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.633448] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.633494] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.440 [2024-07-24 19:22:41.644134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.440 [2024-07-24 19:22:41.644150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.440 [2024-07-24 19:22:41.644165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.440 [2024-07-24 19:22:41.644177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.440 [2024-07-24 
19:22:41.644193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.440 [2024-07-24 19:22:41.644205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.440 [2024-07-24 19:22:41.644220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.440 [2024-07-24 19:22:41.644232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.440 [2024-07-24 19:22:41.644247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.440 [2024-07-24 19:22:41.644259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.441 [2024-07-24 19:22:41.644273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.441 [2024-07-24 19:22:41.644285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.441 [2024-07-24 19:22:41.644299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.441 [2024-07-24 19:22:41.644312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.441 [2024-07-24 19:22:41.644326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.441 [2024-07-24 19:22:41.644338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.441 [2024-07-24 19:22:41.644352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.441 [2024-07-24 19:22:41.644364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.441 [2024-07-24 19:22:41.644379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.441 [2024-07-24 19:22:41.644391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.441 [2024-07-24 19:22:41.644405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.441 [2024-07-24 19:22:41.644418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.441 [2024-07-24 19:22:41.644431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.441 [2024-07-24 19:22:41.644444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.441 [2024-07-24 19:22:41.644458] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.441 [2024-07-24 19:22:41.644470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.441 [2024-07-24 19:22:41.644484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.441 [2024-07-24 19:22:41.644496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.441 [2024-07-24 19:22:41.644510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.441 [2024-07-24 19:22:41.644524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.441 [2024-07-24 19:22:41.644557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:55.441 [2024-07-24 19:22:41.644616] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a99620 was disconnected and freed. reset controller. 00:21:55.441 [2024-07-24 19:22:41.644909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.441 [2024-07-24 19:22:41.644932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.441 [2024-07-24 19:22:41.644946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.441 [2024-07-24 19:22:41.644959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.441 [2024-07-24 19:22:41.644972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.441 [2024-07-24 19:22:41.644984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.441 [2024-07-24 19:22:41.644997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.441 [2024-07-24 19:22:41.645009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.441 [2024-07-24 19:22:41.645021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b81820 is same with the state(5) to be set 00:21:55.441 [2024-07-24 19:22:41.645052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.441 [2024-07-24 19:22:41.645066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.441 [2024-07-24 19:22:41.645079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.441 [2024-07-24 19:22:41.645091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.441 [2024-07-24 19:22:41.645103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.441 [2024-07-24 19:22:41.645115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.441 [2024-07-24 19:22:41.645128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.441 [2024-07-24 19:22:41.645140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.441 [2024-07-24 19:22:41.645152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69300 is same with the state(5) to be set 00:21:55.441 [2024-07-24 19:22:41.645196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.441 [2024-07-24 19:22:41.645210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.441 [2024-07-24 19:22:41.645223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.441 [2024-07-24 19:22:41.645235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.441 [2024-07-24 19:22:41.645248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.441 [2024-07-24 19:22:41.645263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.441 [2024-07-24 19:22:41.645276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.441 [2024-07-24 19:22:41.645288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.441 [2024-07-24 19:22:41.645300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a1610 is same with the state(5) to be set 00:21:55.441 [2024-07-24 19:22:41.645334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.441 [2024-07-24 19:22:41.645348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.441 [2024-07-24 19:22:41.645361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.441 [2024-07-24 19:22:41.645373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.441 [2024-07-24 19:22:41.645386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.441 [2024-07-24 19:22:41.645399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.441 [2024-07-24 19:22:41.645412] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.441 [2024-07-24 19:22:41.645424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.441 [2024-07-24 19:22:41.645436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aad9a0 is same with the state(5) to be set 00:21:55.441 [2024-07-24 19:22:41.645464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.441 [2024-07-24 19:22:41.645478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.441 [2024-07-24 19:22:41.645491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.441 [2024-07-24 19:22:41.645503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.441 [2024-07-24 19:22:41.645516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.441 [2024-07-24 19:22:41.645529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.441 [2024-07-24 19:22:41.645541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.441 [2024-07-24 19:22:41.645553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.441 [2024-07-24 19:22:41.645565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac1420 is same with the state(5) to be set 00:21:55.441 [2024-07-24 19:22:41.645598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.441 [2024-07-24 19:22:41.645612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.441 [2024-07-24 19:22:41.645625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.441 [2024-07-24 19:22:41.645639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.441 [2024-07-24 19:22:41.645652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.441 [2024-07-24 19:22:41.645664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.442 [2024-07-24 19:22:41.645676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.442 [2024-07-24 19:22:41.645688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.442 [2024-07-24 19:22:41.645700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7ff70 is 
same with the state(5) to be set 00:21:55.442
[2024-07-24 19:22:41.645732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9ec30 (9): Bad file descriptor 00:21:55.442
[2024-07-24 19:22:41.645766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.442
[2024-07-24 19:22:41.645780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.442
[2024-07-24 19:22:41.645793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.442
[2024-07-24 19:22:41.645805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.442
[2024-07-24 19:22:41.645817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.442
[2024-07-24 19:22:41.645829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.442
[2024-07-24 19:22:41.645842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.442
[2024-07-24 19:22:41.645854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.442
[2024-07-24 19:22:41.645866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3190 is same with the state(5) to be set 00:21:55.442
[2024-07-24 19:22:41.648838] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.442
[2024-07-24 19:22:41.648854] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.442
[2024-07-24 19:22:41.648866] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.442
[2024-07-24 19:22:41.648877] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.442
[2024-07-24 19:22:41.648888] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.442
[2024-07-24 19:22:41.648884] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:55.442
[2024-07-24 19:22:41.648902] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.442
[2024-07-24 19:22:41.648914] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.442
[2024-07-24 19:22:41.648915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:21:55.442
[2024-07-24 19:22:41.648926] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.442
[2024-07-24 19:22:41.648934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b81820 (9): Bad file descriptor 00:21:55.442
[2024-07-24 19:22:41.648937] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.442
[2024-07-24 19:22:41.648957] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.442
[2024-07-24 19:22:41.648962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aad9a0 (9): Bad file descriptor 00:21:55.442
[2024-07-24 19:22:41.648969] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.442
[2024-07-24 19:22:41.648981] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.442
[2024-07-24 19:22:41.648992] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.442
[2024-07-24 19:22:41.649004] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.442
[2024-07-24 19:22:41.649017] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.442
[2024-07-24 19:22:41.649028] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.442
[2024-07-24 19:22:41.649039] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.442
[2024-07-24 19:22:41.649051] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1510 is same with the state(5) to be set 00:21:55.442
[2024-07-24 19:22:41.649165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.442
[2024-07-24 19:22:41.649187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c55140 with addr=10.0.0.2, port=4420 00:21:55.442
[2024-07-24 19:22:41.649201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c55140 is same with the state(5) to be set 00:21:55.442
[2024-07-24 19:22:41.649634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c55140 (9): Bad file descriptor 00:21:55.442
[2024-07-24 19:22:41.649697] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:55.442
[2024-07-24 19:22:41.649728] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.442
[2024-07-24 19:22:41.649744] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.442
[2024-07-24 19:22:41.649753] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.442
[2024-07-24 19:22:41.649762] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.442
[2024-07-24 19:22:41.649771] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.442
[2024-07-24 19:22:41.649779] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.442
[2024-07-24 19:22:41.649788]
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.442 [2024-07-24 19:22:41.649797] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.442 [2024-07-24 19:22:41.649805] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.442 [2024-07-24 19:22:41.649813] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.442 [2024-07-24 19:22:41.649822] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.442 [2024-07-24 19:22:41.649830] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.442 [2024-07-24 19:22:41.649842] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.442 [2024-07-24 19:22:41.649850] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.442 [2024-07-24 19:22:41.649859] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.442 [2024-07-24 19:22:41.649867] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.442 [2024-07-24 19:22:41.649875] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.442 [2024-07-24 19:22:41.649884] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.442 [2024-07-24 19:22:41.649892] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.442 [2024-07-24 19:22:41.649900] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.442 [2024-07-24 19:22:41.649909] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.442 [2024-07-24 19:22:41.649917] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.442 [2024-07-24 19:22:41.649926] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.442 [2024-07-24 19:22:41.649934] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.442 [2024-07-24 19:22:41.649943] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.442 [2024-07-24 19:22:41.649951] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.442 [2024-07-24 19:22:41.649960] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.442 [2024-07-24 19:22:41.649968] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the 
state(5) to be set 00:21:55.442 [2024-07-24 19:22:41.649976] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.442 [2024-07-24 19:22:41.649987] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.442 [2024-07-24 19:22:41.649996] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.442 [2024-07-24 19:22:41.650004] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.442 [2024-07-24 19:22:41.650013] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.442 [2024-07-24 19:22:41.650022] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.442 [2024-07-24 19:22:41.650030] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.442 [2024-07-24 19:22:41.650038] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.442 [2024-07-24 19:22:41.650047] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.443 [2024-07-24 19:22:41.650055] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.443 [2024-07-24 19:22:41.650063] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.443 [2024-07-24 19:22:41.650081] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.443 [2024-07-24 19:22:41.650090] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.443 [2024-07-24 19:22:41.650098] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.443 [2024-07-24 19:22:41.650106] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.443 [2024-07-24 19:22:41.650115] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.443 [2024-07-24 19:22:41.650123] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.443 [2024-07-24 19:22:41.650132] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.443 [2024-07-24 19:22:41.650140] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.443 [2024-07-24 19:22:41.650149] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.443 [2024-07-24 19:22:41.650157] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.443 [2024-07-24 19:22:41.650165] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.443 [2024-07-24 19:22:41.650174] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.443 [2024-07-24 19:22:41.650182] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.443 [2024-07-24 19:22:41.650190] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.443 [2024-07-24 19:22:41.650198] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.443 [2024-07-24 19:22:41.650207] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.443 [2024-07-24 19:22:41.650215] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.443 [2024-07-24 19:22:41.650223] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.443 [2024-07-24 19:22:41.650232] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.443 [2024-07-24 19:22:41.650240] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.443 [2024-07-24 19:22:41.650248] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.443 [2024-07-24 19:22:41.650257] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.443 [2024-07-24 19:22:41.650266] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.443 [2024-07-24 19:22:41.650275] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f19d0 is same with the state(5) to be set 00:21:55.443 [2024-07-24 19:22:41.650840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.443 [2024-07-24 19:22:41.650869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aad9a0 with addr=10.0.0.2, port=4420 00:21:55.443 [2024-07-24 19:22:41.650882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aad9a0 is same with the state(5) to be set 00:21:55.443 [2024-07-24 19:22:41.651084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.443 [2024-07-24 19:22:41.651101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b81820 with addr=10.0.0.2, port=4420 00:21:55.443 [2024-07-24 19:22:41.651113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b81820 is same with the state(5) to be set 00:21:55.443 [2024-07-24 19:22:41.651127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:21:55.443 [2024-07-24 19:22:41.651139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:21:55.443 [2024-07-24 19:22:41.651153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in 
failed state. 00:21:55.443 [2024-07-24 19:22:41.651224] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:55.443 [2024-07-24 19:22:41.651292] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:55.443 [2024-07-24 19:22:41.651355] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:55.443 [2024-07-24 19:22:41.651407] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:55.443 [2024-07-24 19:22:41.651527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:55.443 [2024-07-24 19:22:41.651545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aad9a0 (9): Bad file descriptor 00:21:55.443 [2024-07-24 19:22:41.651561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b81820 (9): Bad file descriptor 00:21:55.443 [2024-07-24 19:22:41.651757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:21:55.443 [2024-07-24 19:22:41.651774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:21:55.443 [2024-07-24 19:22:41.651787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:21:55.443 [2024-07-24 19:22:41.651805] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:21:55.443 [2024-07-24 19:22:41.651817] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:21:55.443 [2024-07-24 19:22:41.651831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:21:55.443 [2024-07-24 19:22:41.651980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:55.443 [2024-07-24 19:22:41.651994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:55.443 [2024-07-24 19:22:41.652067] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:55.443 [2024-07-24 19:22:41.652208] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:55.443 [2024-07-24 19:22:41.654900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69300 (9): Bad file descriptor 00:21:55.443 [2024-07-24 19:22:41.654949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.443 [2024-07-24 19:22:41.654966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.443 [2024-07-24 19:22:41.654982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.443 [2024-07-24 19:22:41.654995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.443 [2024-07-24 19:22:41.655008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.443 [2024-07-24 19:22:41.655021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.443 [2024-07-24 19:22:41.655035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.443 [2024-07-24 19:22:41.655051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.443 [2024-07-24 19:22:41.655064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b84210 is same with the state(5) to be set 00:21:55.443 [2024-07-24 19:22:41.655088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a1610 (9): Bad file descriptor 00:21:55.443 [2024-07-24 19:22:41.655113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac1420 (9): Bad file descriptor 00:21:55.443 [2024-07-24 19:22:41.655137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7ff70 (9): Bad file descriptor 00:21:55.443 [2024-07-24 19:22:41.655170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac3190 (9): Bad file descriptor 00:21:55.443 [2024-07-24 19:22:41.655299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.443 [2024-07-24 19:22:41.655316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.443 [2024-07-24 19:22:41.655335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.443 [2024-07-24 19:22:41.655348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.443 [2024-07-24 19:22:41.655363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.443 [2024-07-24 19:22:41.655376] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.443 [2024-07-24 19:22:41.655391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.443 [2024-07-24 19:22:41.655404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.443 [2024-07-24 19:22:41.655420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.443 [2024-07-24 19:22:41.655432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.443 [2024-07-24 19:22:41.655447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.443 [2024-07-24 19:22:41.655460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.443 [2024-07-24 19:22:41.655475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.443 [2024-07-24 19:22:41.655488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.444 [2024-07-24 19:22:41.655503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.444 [2024-07-24 19:22:41.655517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.444 [2024-07-24 19:22:41.655533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.444 [2024-07-24 19:22:41.655545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.444 [2024-07-24 19:22:41.655560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.444 [2024-07-24 19:22:41.655577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.444 [2024-07-24 19:22:41.655592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.444 [2024-07-24 19:22:41.655605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.444 [2024-07-24 19:22:41.655619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.444 [2024-07-24 19:22:41.655632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.444 [2024-07-24 19:22:41.655648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.444 [2024-07-24 19:22:41.655661] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.444 [2024-07-24 19:22:41.655675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.444 [2024-07-24 19:22:41.655689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.444 [2024-07-24 19:22:41.655704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.444 [2024-07-24 19:22:41.655725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.444 [2024-07-24 19:22:41.655740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.444 [2024-07-24 19:22:41.655753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.444 [2024-07-24 19:22:41.655769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.444 [2024-07-24 19:22:41.655781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.444 [2024-07-24 19:22:41.655797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.444 [2024-07-24 19:22:41.655810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.444 [2024-07-24 19:22:41.655825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.444 [2024-07-24 19:22:41.655838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.444 [2024-07-24 19:22:41.655853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.444 [2024-07-24 19:22:41.655865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.444 [2024-07-24 19:22:41.655880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.444 [2024-07-24 19:22:41.655894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.444 [2024-07-24 19:22:41.655909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.444 [2024-07-24 19:22:41.655921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.444 [2024-07-24 19:22:41.655938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.444 [2024-07-24 19:22:41.655951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.444 [2024-07-24 19:22:41.655966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.444 [2024-07-24 19:22:41.655979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.444 [2024-07-24 19:22:41.655994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.444 [2024-07-24 19:22:41.656008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.444 [2024-07-24 19:22:41.656022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.444 [2024-07-24 19:22:41.656035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.444 [2024-07-24 19:22:41.656051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.444 [2024-07-24 19:22:41.656064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.444 [2024-07-24 19:22:41.656079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.444 [2024-07-24 19:22:41.656093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.444 [2024-07-24 19:22:41.656108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.444 [2024-07-24 19:22:41.656120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.444 [2024-07-24 19:22:41.656135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.444 [2024-07-24 19:22:41.656148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.444 [2024-07-24 19:22:41.656163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.444 [2024-07-24 19:22:41.656176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.444 [2024-07-24 19:22:41.656191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.444 [2024-07-24 19:22:41.656204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.444 [2024-07-24 19:22:41.656218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.444 [2024-07-24 19:22:41.656231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.444 [2024-07-24 19:22:41.656246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.444 [2024-07-24 19:22:41.656259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.444 [2024-07-24 19:22:41.656274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.444 [2024-07-24 19:22:41.656289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.444 [2024-07-24 19:22:41.656304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.444 [2024-07-24 19:22:41.656317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.444 [2024-07-24 19:22:41.656332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.444 [2024-07-24 19:22:41.656345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.444 [2024-07-24 19:22:41.656360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.444 [2024-07-24 19:22:41.656373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.444 [2024-07-24 19:22:41.656388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.445 [2024-07-24 19:22:41.656401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.445 [2024-07-24 19:22:41.656415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.445 [2024-07-24 19:22:41.656428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.445 [2024-07-24 19:22:41.656443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.445 [2024-07-24 19:22:41.656456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.445 [2024-07-24 19:22:41.656471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.445 [2024-07-24 19:22:41.656484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.445 [2024-07-24 19:22:41.656499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.445 [2024-07-24 19:22:41.656512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:55.445 [2024-07-24 19:22:41.656527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.445 [2024-07-24 19:22:41.656539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.445 [2024-07-24 19:22:41.656554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.445 [2024-07-24 19:22:41.656568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.445 [2024-07-24 19:22:41.656583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.445 [2024-07-24 19:22:41.656596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.445 [2024-07-24 19:22:41.656611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.445 [2024-07-24 19:22:41.656624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.445 [2024-07-24 19:22:41.656642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.445 [2024-07-24 19:22:41.656655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.445 [2024-07-24 19:22:41.656669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.445 [2024-07-24 19:22:41.656682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.445 [2024-07-24 19:22:41.656697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.445 [2024-07-24 19:22:41.656710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.445 [2024-07-24 19:22:41.656730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.445 [2024-07-24 19:22:41.656743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.445 [2024-07-24 19:22:41.656757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.445 [2024-07-24 19:22:41.656770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.445 [2024-07-24 19:22:41.656785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.445 [2024-07-24 19:22:41.656798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:55.445 [2024-07-24 19:22:41.656813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.445 [2024-07-24 19:22:41.656826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.445 [2024-07-24 19:22:41.656841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.445 [2024-07-24 19:22:41.656854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.445 [2024-07-24 19:22:41.656868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.445 [2024-07-24 19:22:41.656881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.445 [2024-07-24 19:22:41.656896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.445 [2024-07-24 19:22:41.656909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.445 [2024-07-24 19:22:41.656924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.445 [2024-07-24 19:22:41.656937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.445 [2024-07-24 19:22:41.656951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.445 [2024-07-24 19:22:41.656964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.445 [2024-07-24 19:22:41.656979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.445 [2024-07-24 19:22:41.656994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.445 [2024-07-24 19:22:41.657010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.445 [2024-07-24 19:22:41.657023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.445 [2024-07-24 19:22:41.657038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.445 [2024-07-24 19:22:41.657050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.445 [2024-07-24 19:22:41.657065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.445 [2024-07-24 19:22:41.657078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.445 [2024-07-24 
19:22:41.657093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.445 [2024-07-24 19:22:41.657106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.445 [2024-07-24 19:22:41.657119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aef250 is same with the state(5) to be set 00:21:55.445 [2024-07-24 19:22:41.658496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:55.445 [2024-07-24 19:22:41.658587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:21:55.445 [2024-07-24 19:22:41.658926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.445 [2024-07-24 19:22:41.658947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9ec30 with addr=10.0.0.2, port=4420 00:21:55.445 [2024-07-24 19:22:41.658961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9ec30 is same with the state(5) to be set 00:21:55.445 [2024-07-24 19:22:41.659537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.445 [2024-07-24 19:22:41.659556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c55140 with addr=10.0.0.2, port=4420 00:21:55.445 [2024-07-24 19:22:41.659569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c55140 is same with the state(5) to be set 00:21:55.445 [2024-07-24 19:22:41.659584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9ec30 (9): Bad file descriptor 00:21:55.445 [2024-07-24 19:22:41.659644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c55140 (9): Bad file descriptor 00:21:55.445 [2024-07-24 19:22:41.659660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:55.445 [2024-07-24 19:22:41.659672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:55.445 [2024-07-24 19:22:41.659686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:55.445 [2024-07-24 19:22:41.659751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:55.445 [2024-07-24 19:22:41.659766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:21:55.445 [2024-07-24 19:22:41.659778] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:21:55.445 [2024-07-24 19:22:41.659791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:21:55.445 [2024-07-24 19:22:41.659854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:55.445 [2024-07-24 19:22:41.659910] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:21:55.445 [2024-07-24 19:22:41.659930] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:55.445 [2024-07-24 19:22:41.660296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.445 [2024-07-24 19:22:41.660311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b81820 with addr=10.0.0.2, port=4420 00:21:55.445 [2024-07-24 19:22:41.660321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b81820 is same with the state(5) to be set 00:21:55.711 [2024-07-24 19:22:41.660547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.711 [2024-07-24 19:22:41.660561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aad9a0 with addr=10.0.0.2, port=4420 00:21:55.711 [2024-07-24 19:22:41.660570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aad9a0 is same with the state(5) to be set 00:21:55.711 [2024-07-24 19:22:41.660606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b81820 (9): Bad file descriptor 00:21:55.711 [2024-07-24 19:22:41.660621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aad9a0 (9): Bad file descriptor 00:21:55.711 [2024-07-24 19:22:41.660657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:21:55.711 [2024-07-24 19:22:41.660668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:21:55.711 [2024-07-24 19:22:41.660678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:21:55.711 [2024-07-24 19:22:41.660690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:21:55.711 [2024-07-24 19:22:41.660700] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:21:55.711 [2024-07-24 19:22:41.660709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:21:55.711 [2024-07-24 19:22:41.660750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:55.711 [2024-07-24 19:22:41.660759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:55.711 [2024-07-24 19:22:41.664940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b84210 (9): Bad file descriptor 00:21:55.713 [2024-07-24 19:22:41.666403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af00e0 is same with the state(5) to be set 00:21:55.713 [2024-07-24 19:22:41.667432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.713 [2024-07-24 19:22:41.667451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.713 [2024-07-24 19:22:41.667464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.713 [2024-07-24 19:22:41.667474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.713 [2024-07-24 19:22:41.667486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.713 [2024-07-24 19:22:41.667495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.713 [2024-07-24 19:22:41.667509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.713 [2024-07-24 19:22:41.667519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.713 [2024-07-24 19:22:41.667530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.713 [2024-07-24 19:22:41.667540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.713 [2024-07-24 19:22:41.667552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.713 [2024-07-24 19:22:41.667561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.713 [2024-07-24 19:22:41.667572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.713 [2024-07-24 19:22:41.667582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.713 [2024-07-24 19:22:41.667593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.713 [2024-07-24 19:22:41.667603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.713 [2024-07-24 19:22:41.667614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.714 [2024-07-24 19:22:41.667623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.714 [2024-07-24 19:22:41.667634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.714 [2024-07-24 19:22:41.667644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.714 [2024-07-24 19:22:41.667655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.714 [2024-07-24 19:22:41.667665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.714 [2024-07-24 19:22:41.667676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.714 [2024-07-24 19:22:41.667685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.714 [2024-07-24 19:22:41.667696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.714 [2024-07-24 19:22:41.667706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.714 [2024-07-24 19:22:41.667722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.714 [2024-07-24 19:22:41.667731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.714 [2024-07-24 19:22:41.667743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.714 [2024-07-24 19:22:41.667752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.714 [2024-07-24 19:22:41.667763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.714 [2024-07-24 19:22:41.667778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.714 [2024-07-24 19:22:41.667790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.714 [2024-07-24 19:22:41.667799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.714 [2024-07-24 19:22:41.667810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.714 [2024-07-24 19:22:41.667820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.714 [2024-07-24 19:22:41.667831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.714 [2024-07-24 19:22:41.667841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.714 [2024-07-24 19:22:41.667852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.714 [2024-07-24 19:22:41.667861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.714 [2024-07-24 19:22:41.667872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.714 [2024-07-24 19:22:41.667882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.714 [2024-07-24 19:22:41.667894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.714 [2024-07-24 19:22:41.667903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.714 [2024-07-24 19:22:41.667914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.714 [2024-07-24 19:22:41.667924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.714 [2024-07-24 19:22:41.667936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.714 [2024-07-24 19:22:41.667946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.714 [2024-07-24 19:22:41.667957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.714 [2024-07-24 19:22:41.667966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.714 [2024-07-24 19:22:41.667977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.714 [2024-07-24 19:22:41.667987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.714 [2024-07-24 19:22:41.667998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.714 [2024-07-24 19:22:41.668008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.714 [2024-07-24 19:22:41.668019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.714 [2024-07-24 19:22:41.668029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.714 [2024-07-24 19:22:41.668041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.714 [2024-07-24 19:22:41.668051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.714 [2024-07-24 19:22:41.668062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.714 [2024-07-24 19:22:41.668072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.714 [2024-07-24 19:22:41.668083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.714 [2024-07-24 19:22:41.668092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.714 [2024-07-24 19:22:41.668104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.714 [2024-07-24 19:22:41.668114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.714 [2024-07-24 19:22:41.668125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.714 [2024-07-24 19:22:41.668135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.714 [2024-07-24 19:22:41.668146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.714 [2024-07-24 19:22:41.668156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.714 [2024-07-24 19:22:41.668167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.714 [2024-07-24 19:22:41.668176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.714 [2024-07-24 19:22:41.668187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:55.714 [2024-07-24 19:22:41.668197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.714 [2024-07-24 19:22:41.668208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.714 [2024-07-24 19:22:41.668218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.714 [2024-07-24 19:22:41.668228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.714 [2024-07-24 19:22:41.668238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.714 [2024-07-24 19:22:41.668249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.714 [2024-07-24 19:22:41.668259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.714 [2024-07-24 19:22:41.668270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.714 [2024-07-24 19:22:41.668280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.714 [2024-07-24 19:22:41.668291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.714 [2024-07-24 19:22:41.668303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.714 [2024-07-24 19:22:41.668315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.714 [2024-07-24 19:22:41.668325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.714 [2024-07-24 19:22:41.668335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.714 [2024-07-24 19:22:41.668345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.714 [2024-07-24 19:22:41.668356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.714 [2024-07-24 19:22:41.668366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.714 [2024-07-24 19:22:41.668377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.714 [2024-07-24 19:22:41.668386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.715 [2024-07-24 19:22:41.668398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:55.715 [2024-07-24 19:22:41.668407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.715 [2024-07-24 19:22:41.668418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.715 [2024-07-24 19:22:41.668428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.715 [2024-07-24 19:22:41.668439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.715 [2024-07-24 19:22:41.668449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.715 [2024-07-24 19:22:41.668460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.715 [2024-07-24 19:22:41.668470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.715 [2024-07-24 19:22:41.668481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.715 [2024-07-24 19:22:41.668491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.715 [2024-07-24 19:22:41.668502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.715 [2024-07-24 19:22:41.668511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.715 [2024-07-24 19:22:41.668522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.715 [2024-07-24 19:22:41.668532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.715 [2024-07-24 19:22:41.668543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.715 [2024-07-24 19:22:41.668553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.715 [2024-07-24 19:22:41.668565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.715 [2024-07-24 19:22:41.668575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.715 [2024-07-24 19:22:41.668586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.715 [2024-07-24 19:22:41.668595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.715 [2024-07-24 19:22:41.668607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.715 [2024-07-24 
19:22:41.668616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.715 [2024-07-24 19:22:41.668627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.715 [2024-07-24 19:22:41.668637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.715 [2024-07-24 19:22:41.668648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.715 [2024-07-24 19:22:41.668658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.715 [2024-07-24 19:22:41.668669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.715 [2024-07-24 19:22:41.668679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.715 [2024-07-24 19:22:41.668690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.715 [2024-07-24 19:22:41.668699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.715 [2024-07-24 19:22:41.668710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.715 [2024-07-24 19:22:41.668723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.715 [2024-07-24 19:22:41.668734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.715 [2024-07-24 19:22:41.668743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.715 [2024-07-24 19:22:41.668754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.715 [2024-07-24 19:22:41.668764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.715 [2024-07-24 19:22:41.668775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.715 [2024-07-24 19:22:41.668784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.715 [2024-07-24 19:22:41.668794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98130 is same with the state(5) to be set 00:21:55.715 [2024-07-24 19:22:41.669823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.715 [2024-07-24 19:22:41.669839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.715 [2024-07-24 19:22:41.669852] nvme_qpair.c: 
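Note on the condensed runs above and below: each pair is SPDK printing a still-queued I/O command (nvme_io_qpair_print_command) and its completion (spdk_nvme_print_completion); ABORTED - SQ DELETION (00/08) is the generic NVMe status (SCT 0, SC 0x08) returned for commands outstanding when their submission queue is deleted, which is why every cid on the queue appears exactly once per qpair before the nvme_tcp recv-state error. What follows is a minimal, hypothetical Python sketch (an editorial helper, not part of the SPDK test scripts) of how such runs can be condensed into the cid/lba range summaries used here:

import re
import sys
from collections import defaultdict

# Matches the command half of each notice pair, e.g.:
#   READ sqid:1 cid:60 nsid:1 lba:32256 len:128
CMD = re.compile(r"(READ|WRITE) sqid:(\d+) cid:(\d+) nsid:\d+ lba:(\d+) len:(\d+)")

def condense(log_text):
    """Group aborted-command notices by (opcode, sqid) and report the
    cid and lba ranges each group spans, one summary line per group."""
    groups = defaultdict(list)
    for op, sqid, cid, lba, _length in CMD.findall(log_text):
        groups[(op, int(sqid))].append((int(cid), int(lba)))
    summaries = []
    for (op, sqid), entries in sorted(groups.items()):
        cids = sorted(c for c, _ in entries)
        lbas = sorted(l for _, l in entries)
        summaries.append("%s sqid:%d: %d commands, cid:%d-%d, lba:%d-%d"
                         % (op, sqid, len(entries),
                            cids[0], cids[-1], lbas[0], lbas[-1]))
    return summaries

if __name__ == "__main__":
    for line in condense(sys.stdin.read()):
        print(line)

Applied one qpair block at a time (the stretches between nvme_tcp recv-state errors, since every qpair here uses sqid:1), this would report, for the block above, READ sqid:1: 64 commands, cid:0-63, lba:24576-32640.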
00:21:55.715 [2024-07-24 19:22:41.669823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... 64 command/completion pairs for this qpair condensed: READ cid:7-13 (lba:25472-26240), WRITE cid:0-3 (lba:32768-33152), READ cid:14-18 (lba:26368-26880), WRITE cid:4-6 (lba:33280-33536), READ cid:19-63 (lba:27008-32640), len:128 each, every completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:21:55.717 [2024-07-24 19:22:41.671158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9ab10 is same with the state(5) to be set
00:21:55.717 [2024-07-24 19:22:41.672170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... 47 READ command/completion pairs for the next qpair (cid:0-46, lba:24576-30464, len:128 each, every completion ABORTED - SQ DELETION (00/08)) condensed; the raw log resumes below ...]
00:21:55.718 [2024-07-24 19:22:41.673137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.718 [2024-07-24 19:22:41.673146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.718 [2024-07-24 19:22:41.673157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.718 [2024-07-24 19:22:41.673166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.718 [2024-07-24 19:22:41.673176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.718 [2024-07-24 19:22:41.673187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.718 [2024-07-24 19:22:41.673197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.718 [2024-07-24 19:22:41.673207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.718 [2024-07-24 19:22:41.673217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.718 [2024-07-24 19:22:41.673227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.718 [2024-07-24 19:22:41.673237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.718 [2024-07-24 19:22:41.673246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.718 [2024-07-24 19:22:41.673257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.718 [2024-07-24 19:22:41.673266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.718 [2024-07-24 19:22:41.673277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.719 [2024-07-24 19:22:41.673287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.719 [2024-07-24 19:22:41.673297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.719 [2024-07-24 19:22:41.673307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.719 [2024-07-24 19:22:41.673317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.719 [2024-07-24 19:22:41.673326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.719 [2024-07-24 
19:22:41.673337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.719 [2024-07-24 19:22:41.673346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.719 [2024-07-24 19:22:41.673356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.719 [2024-07-24 19:22:41.673365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.719 [2024-07-24 19:22:41.673376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.719 [2024-07-24 19:22:41.673386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.719 [2024-07-24 19:22:41.673397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.719 [2024-07-24 19:22:41.673407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.719 [2024-07-24 19:22:41.673418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.719 [2024-07-24 19:22:41.673427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.719 [2024-07-24 19:22:41.673439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.719 [2024-07-24 19:22:41.673448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.719 [2024-07-24 19:22:41.673458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.719 [2024-07-24 19:22:41.673468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.719 [2024-07-24 19:22:41.673477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cc500 is same with the state(5) to be set 00:21:55.719 [2024-07-24 19:22:41.674474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.719 [2024-07-24 19:22:41.674489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.719 [2024-07-24 19:22:41.674502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.719 [2024-07-24 19:22:41.674513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.719 [2024-07-24 19:22:41.674524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.719 [2024-07-24 19:22:41.674533] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.719 [2024-07-24 19:22:41.674544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.719 [2024-07-24 19:22:41.674554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.719 [2024-07-24 19:22:41.674565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.719 [2024-07-24 19:22:41.674574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.719 [2024-07-24 19:22:41.674585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.719 [2024-07-24 19:22:41.674594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.719 [2024-07-24 19:22:41.674605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.719 [2024-07-24 19:22:41.674614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.719 [2024-07-24 19:22:41.674625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.719 [2024-07-24 19:22:41.674635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.719 [2024-07-24 19:22:41.674645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.719 [2024-07-24 19:22:41.674654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.719 [2024-07-24 19:22:41.674665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.719 [2024-07-24 19:22:41.674675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.719 [2024-07-24 19:22:41.674690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.719 [2024-07-24 19:22:41.674699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.719 [2024-07-24 19:22:41.674710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.719 [2024-07-24 19:22:41.674724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.719 [2024-07-24 19:22:41.674735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.719 [2024-07-24 19:22:41.674744] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.719 [2024-07-24 19:22:41.674755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.719 [2024-07-24 19:22:41.674764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.719 [2024-07-24 19:22:41.674775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.719 [2024-07-24 19:22:41.674784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.719 [2024-07-24 19:22:41.674795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.719 [2024-07-24 19:22:41.674804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.719 [2024-07-24 19:22:41.674815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.719 [2024-07-24 19:22:41.674824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.719 [2024-07-24 19:22:41.674834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.719 [2024-07-24 19:22:41.674843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.719 [2024-07-24 19:22:41.674854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.719 [2024-07-24 19:22:41.674863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.719 [2024-07-24 19:22:41.674874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.719 [2024-07-24 19:22:41.674883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.719 [2024-07-24 19:22:41.674895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.719 [2024-07-24 19:22:41.674904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.719 [2024-07-24 19:22:41.674915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.719 [2024-07-24 19:22:41.674924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.719 [2024-07-24 19:22:41.674935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.719 [2024-07-24 19:22:41.674946] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.719 [2024-07-24 19:22:41.674957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.719 [2024-07-24 19:22:41.674966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.719 [2024-07-24 19:22:41.674977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.719 [2024-07-24 19:22:41.674986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.719 [2024-07-24 19:22:41.674997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.719 [2024-07-24 19:22:41.675006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.719 [2024-07-24 19:22:41.675016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.720 [2024-07-24 19:22:41.675025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.720 [2024-07-24 19:22:41.675036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.720 [2024-07-24 19:22:41.675045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.720 [2024-07-24 19:22:41.675056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.720 [2024-07-24 19:22:41.675065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.720 [2024-07-24 19:22:41.675076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.720 [2024-07-24 19:22:41.675086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.720 [2024-07-24 19:22:41.675097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.720 [2024-07-24 19:22:41.675106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.720 [2024-07-24 19:22:41.675116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.720 [2024-07-24 19:22:41.675125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.720 [2024-07-24 19:22:41.675136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.720 [2024-07-24 19:22:41.675145] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.720 [2024-07-24 19:22:41.675156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.720 [2024-07-24 19:22:41.675165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.720 [2024-07-24 19:22:41.675175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.720 [2024-07-24 19:22:41.675184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.720 [2024-07-24 19:22:41.675197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.720 [2024-07-24 19:22:41.675206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.720 [2024-07-24 19:22:41.675217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.720 [2024-07-24 19:22:41.675226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.720 [2024-07-24 19:22:41.675236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.720 [2024-07-24 19:22:41.675246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.720 [2024-07-24 19:22:41.675256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.720 [2024-07-24 19:22:41.675266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.720 [2024-07-24 19:22:41.675276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.720 [2024-07-24 19:22:41.675285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.720 [2024-07-24 19:22:41.675296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.720 [2024-07-24 19:22:41.675305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.720 [2024-07-24 19:22:41.675316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.720 [2024-07-24 19:22:41.675325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.720 [2024-07-24 19:22:41.675336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.720 [2024-07-24 19:22:41.675345] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.720 [2024-07-24 19:22:41.675355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.720 [2024-07-24 19:22:41.675365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.720 [2024-07-24 19:22:41.675375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.720 [2024-07-24 19:22:41.675384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.720 [2024-07-24 19:22:41.675395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.720 [2024-07-24 19:22:41.675404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.720 [2024-07-24 19:22:41.675415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.720 [2024-07-24 19:22:41.675424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.720 [2024-07-24 19:22:41.675435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.720 [2024-07-24 19:22:41.675447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.720 [2024-07-24 19:22:41.675457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.720 [2024-07-24 19:22:41.675467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.720 [2024-07-24 19:22:41.675477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.720 [2024-07-24 19:22:41.675487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.720 [2024-07-24 19:22:41.675498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.720 [2024-07-24 19:22:41.675507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.720 [2024-07-24 19:22:41.675518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.720 [2024-07-24 19:22:41.675527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.720 [2024-07-24 19:22:41.675538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.720 [2024-07-24 19:22:41.675547] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.720 [2024-07-24 19:22:41.675558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.720 [2024-07-24 19:22:41.675567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.720 [2024-07-24 19:22:41.675578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.720 [2024-07-24 19:22:41.675587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.720 [2024-07-24 19:22:41.675597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.720 [2024-07-24 19:22:41.675606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.720 [2024-07-24 19:22:41.675617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.720 [2024-07-24 19:22:41.675626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.720 [2024-07-24 19:22:41.675637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.720 [2024-07-24 19:22:41.675646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.720 [2024-07-24 19:22:41.675657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.720 [2024-07-24 19:22:41.675667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.720 [2024-07-24 19:22:41.675677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.720 [2024-07-24 19:22:41.675687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.720 [2024-07-24 19:22:41.675697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.720 [2024-07-24 19:22:41.675708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.720 [2024-07-24 19:22:41.675722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.720 [2024-07-24 19:22:41.675732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.720 [2024-07-24 19:22:41.675742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.721 [2024-07-24 19:22:41.675751] nvme_qpair.c: 
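The dumps above are SPDK flushing dying I/O qpairs: each still-queued READ (printed by nvme_io_qpair_print_command) completes with the status pair "(00/08)" (printed by spdk_nvme_print_completion), i.e. status code type 0x0 (generic command status) and status code 0x08, which the NVMe spec defines as Command Aborted due to SQ Deletion. As a reader aid, a minimal, self-contained decode of completion dword 3, assuming only the standard NVMe completion layout; this is illustrative C, not SPDK's print helper:

    /* Sketch: decode the "(sct/sc)" pair SPDK prints for each completion.
     * Per the NVMe spec, completion dword 3 holds the phase tag in bit 16
     * and the status field above it: SC in bits 17..24, SCT in bits 25..27,
     * More in bit 30, DNR in bit 31. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t dw3 = 0x08u << 17; /* SCT=0x0 (generic), SC=0x08, as logged */

        unsigned p   = (dw3 >> 16) & 0x1;  /* phase tag */
        unsigned sc  = (dw3 >> 17) & 0xff; /* status code */
        unsigned sct = (dw3 >> 25) & 0x7;  /* status code type */
        unsigned m   = (dw3 >> 30) & 0x1;  /* more */
        unsigned dnr = (dw3 >> 31) & 0x1;  /* do not retry */

        /* SCT 0x0 / SC 0x08 = "Command Aborted due to SQ Deletion": every
         * READ still queued when the submission queue goes away ends here. */
        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
        return 0;
    }

Run as-is it prints "(00/08) p:0 m:0 dnr:0", the exact tail of each completion line above.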
00:21:55.721 [2024-07-24 19:22:41.676759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:21:55.721 [2024-07-24 19:22:41.676776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:21:55.721 [2024-07-24 19:22:41.676786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:21:55.721 [2024-07-24 19:22:41.676797] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:21:55.721 [2024-07-24 19:22:41.676870] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:55.721 [2024-07-24 19:22:41.676940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:21:55.721 [2024-07-24 19:22:41.677283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:55.721 [2024-07-24 19:22:41.677299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac3190 with addr=10.0.0.2, port=4420
00:21:55.721 [2024-07-24 19:22:41.677309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3190 is same with the state(5) to be set
00:21:55.721 [2024-07-24 19:22:41.677532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:55.721 [2024-07-24 19:22:41.677544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac1420 with addr=10.0.0.2, port=4420
00:21:55.721 [2024-07-24 19:22:41.677554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac1420 is same with the state(5) to be set
00:21:55.721 [2024-07-24 19:22:41.677872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:55.721 [2024-07-24 19:22:41.677886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7ff70 with addr=10.0.0.2, port=4420
00:21:55.721 [2024-07-24 19:22:41.677895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7ff70 is same with the state(5) to be set
00:21:55.721 [2024-07-24 19:22:41.678169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:55.721 [2024-07-24 19:22:41.678181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a1610 with addr=10.0.0.2, port=4420
00:21:55.721 [2024-07-24 19:22:41.678191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a1610 is same with the state(5) to be set
00:21:55.721 [2024-07-24 19:22:41.679298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.721 [2024-07-24 19:22:41.679316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.722 [... 62 identical READ/ABORTED - SQ DELETION (00/08) pairs elided: cid:1-62, lba 24704-32512, same pattern as above ...]
00:21:55.723 [2024-07-24 19:22:41.680586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:55.723 [2024-07-24 19:22:41.680596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:55.723 [2024-07-24 19:22:41.680607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7280 is same with the state(5) to be set
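Each reset attempt above then dies at the socket layer: posix_sock_create's connect() returns errno 111, which on Linux is ECONNREFUSED, because nothing is listening at 10.0.0.2:4420 any more once the target side of the test is shut down. A standalone sketch of the same failure mode (the address and port come from the log; against an unreachable host connect() would fail with a different errno or time out):

    /* Sketch: reproduce "connect() failed, errno = 111" outside SPDK.
     * ECONNREFUSED (111 on Linux) is what connect() returns when the host
     * is reachable but no listener is bound to the port. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        if (fd < 0)
            return 1;
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr); /* target addr from the log */

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        return 0;
    }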
00:21:55.723 [2024-07-24 19:22:41.681884] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:55.723 [2024-07-24 19:22:41.681904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:21:55.723 [2024-07-24 19:22:41.681915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:21:55.723 [2024-07-24 19:22:41.681926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:21:55.723 task offset: 27904 on job bdev=Nvme10n1 fails
00:21:55.723
00:21:55.723                                                                Latency(us)
00:21:55.723 Device Information  :  runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min  max
00:21:55.723 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:55.723 Job: Nvme1n1 ended in about 0.91 seconds with error
00:21:55.723 Verification LBA range: start 0x0 length 0x400
00:21:55.723 Nvme1n1   :  0.91  210.04  13.13  70.01  0.00  226407.83  16357.79  204682.04
00:21:55.723 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:55.723 Job: Nvme2n1 ended in about 0.92 seconds with error
00:21:55.723 Verification LBA range: start 0x0 length 0x400
00:21:55.723 Nvme2n1   :  0.92  207.96  13.00  69.32  0.00  224955.19  16777.22  221459.25
00:21:55.723 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:55.723 Job: Nvme3n1 ended in about 0.90 seconds with error
00:21:55.723 Verification LBA range: start 0x0 length 0x400
00:21:55.723 Nvme3n1   :  0.90  283.52  17.72  70.88  0.00  172868.73  10590.62  204682.04
00:21:55.723 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:55.723 Job: Nvme4n1 ended in about 0.93 seconds with error
00:21:55.723 Verification LBA range: start 0x0 length 0x400
00:21:55.723 Nvme4n1   :  0.93  207.42  12.96  69.14  0.00  218105.65  17511.22  221459.25
00:21:55.723 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:55.723 Job: Nvme5n1 ended in about 0.90 seconds with error
00:21:55.723 Verification LBA range: start 0x0 length 0x400
00:21:55.723 Nvme5n1   :  0.90  212.35  13.27  70.78  0.00  209013.96  18350.08  218103.81
00:21:55.723 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:55.723 Job: Nvme6n1 ended in about 0.93 seconds with error
00:21:55.723 Verification LBA range: start 0x0 length 0x400
00:21:55.723 Nvme6n1   :  0.93  214.44  13.40  68.97  0.00  205671.70  8860.47  203004.31
00:21:55.723 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:55.723 Job: Nvme7n1 ended in about 0.93 seconds with error
00:21:55.723 Verification LBA range: start 0x0 length 0x400
00:21:55.723 Nvme7n1   :  0.93  206.38  12.90  68.79  0.00  208100.15  18035.51  211392.92
00:21:55.723 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:55.723 Job: Nvme8n1 ended in about 0.93 seconds with error
00:21:55.723 Verification LBA range: start 0x0 length 0x400
00:21:55.723 Nvme8n1   :  0.93  205.87  12.87  68.62  0.00  204956.67  17406.36  203004.31
00:21:55.723 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:55.723 Job: Nvme9n1 ended in about 0.94 seconds with error
00:21:55.723 Verification LBA range: start 0x0 length 0x400
00:21:55.723 Nvme9n1   :  0.94  204.81  12.80  68.27  0.00  202432.51  17616.08  208037.48
00:21:55.723 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:55.723 Job: Nvme10n1 ended in about 0.88 seconds with error
00:21:55.723 Verification LBA range: start 0x0 length 0x400
00:21:55.723 Nvme10n1  :  0.88  217.40  13.59  72.47  0.00  185131.32  4141.88  228170.14
00:21:55.723 ===================================================================================================================
00:21:55.723 Total     :  2170.19  135.64  697.26  0.00  204963.93  4141.88  228170.14
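The table is internally consistent: the jobs run 64 KiB IOs (IO size: 65536), so MiB/s = IOPS x 65536 / 2^20 = IOPS / 16 in every row (Nvme1n1: 210.04 / 16 = 13.13; Total: 2170.19 / 16 = 135.64). The same arithmetic as a quick check, using two rows from the table:

    /* Sketch: verify the MiB/s column from the IOPS column for 64 KiB IOs. */
    #include <stdio.h>

    int main(void)
    {
        const double io_bytes = 65536.0;             /* "IO size: 65536" job line */
        const double iops[]   = { 210.04, 2170.19 }; /* Nvme1n1 row, Total row */

        for (int i = 0; i < 2; i++)
            printf("%8.2f IOPS -> %6.2f MiB/s\n",
                   iops[i], iops[i] * io_bytes / (1024.0 * 1024.0));
        return 0; /* prints 13.13 and 135.64, matching the table above */
    }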
00:21:55.723 [2024-07-24 19:22:41.702657] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:55.723 [2024-07-24 19:22:41.702696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:21:55.723 [2024-07-24 19:22:41.703017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:55.723 [2024-07-24 19:22:41.703036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69300 with addr=10.0.0.2, port=4420
00:21:55.723 [2024-07-24 19:22:41.703048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69300 is same with the state(5) to be set
00:21:55.723 [2024-07-24 19:22:41.703066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac3190 (9): Bad file descriptor
00:21:55.723 [2024-07-24 19:22:41.703079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac1420 (9): Bad file descriptor
00:21:55.723 [2024-07-24 19:22:41.703091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7ff70 (9): Bad file descriptor
00:21:55.723 [2024-07-24 19:22:41.703102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a1610 (9): Bad file descriptor
00:21:55.723 [2024-07-24 19:22:41.703504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:55.723 [2024-07-24 19:22:41.703519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9ec30 with addr=10.0.0.2, port=4420
00:21:55.723 [2024-07-24 19:22:41.703530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9ec30 is same with the state(5) to be set
00:21:55.723 [2024-07-24 19:22:41.703809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:55.723 [2024-07-24 19:22:41.703822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c55140 with addr=10.0.0.2, port=4420
00:21:55.723 [2024-07-24 19:22:41.703831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c55140 is same with the state(5) to be set
00:21:55.723 [2024-07-24 19:22:41.704126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:55.723 [2024-07-24 19:22:41.704138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aad9a0 with addr=10.0.0.2, port=4420
00:21:55.723 [2024-07-24 19:22:41.704147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aad9a0 is same with the state(5) to be set
00:21:55.723 [2024-07-24 19:22:41.704364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:55.723 [2024-07-24 19:22:41.704375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b81820 with addr=10.0.0.2, port=4420
00:21:55.723 [2024-07-24 19:22:41.704385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b81820 is same with the state(5) to be set
00:21:55.723 [2024-07-24 19:22:41.704604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:55.723 [2024-07-24 19:22:41.704615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b84210 with addr=10.0.0.2, port=4420
00:21:55.723 [2024-07-24 19:22:41.704625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b84210 is same with the state(5) to be set
00:21:55.723 [2024-07-24 19:22:41.704637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69300 (9): Bad file descriptor
00:21:55.723 [2024-07-24 19:22:41.704648] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:21:55.723 [2024-07-24 19:22:41.704657] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:21:55.723 [2024-07-24 19:22:41.704667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:21:55.723 [2024-07-24 19:22:41.704681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:21:55.723 [2024-07-24 19:22:41.704695] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:21:55.723 [2024-07-24 19:22:41.704704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:21:55.723 [2024-07-24 19:22:41.704719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:21:55.723 [2024-07-24 19:22:41.704728] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:21:55.723 [2024-07-24 19:22:41.704737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:21:55.723 [2024-07-24 19:22:41.704747] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:21:55.723 [2024-07-24 19:22:41.704756] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:21:55.723 [2024-07-24 19:22:41.704765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
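Every connect() in this stretch fails with errno = 111 because the shutdown_tc3 case has already torn the target down mid-I/O, so nothing is listening on 10.0.0.2:4420 when the hosts try to reconnect (the kill -9 further below even finds no process left). On Linux, errno 111 is ECONNREFUSED; a throwaway one-liner to confirm the mapping on the build host (not part of the test suite):

    # decode errno 111 -> prints "ECONNREFUSED Connection refused"
    python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'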
00:21:55.724 [2024-07-24 19:22:41.704794] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:55.724 [2024-07-24 19:22:41.704808] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:55.724 [2024-07-24 19:22:41.704821] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:55.724 [2024-07-24 19:22:41.704833] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:55.724 [2024-07-24 19:22:41.704845] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:55.724 [2024-07-24 19:22:41.705132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:55.724 [2024-07-24 19:22:41.705143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:55.724 [2024-07-24 19:22:41.705151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:55.724 [2024-07-24 19:22:41.705158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:55.724 [2024-07-24 19:22:41.705167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9ec30 (9): Bad file descriptor
00:21:55.724 [2024-07-24 19:22:41.705178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c55140 (9): Bad file descriptor
00:21:55.724 [2024-07-24 19:22:41.705189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aad9a0 (9): Bad file descriptor
00:21:55.724 [2024-07-24 19:22:41.705200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b81820 (9): Bad file descriptor
00:21:55.724 [2024-07-24 19:22:41.705211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b84210 (9): Bad file descriptor
00:21:55.724 [2024-07-24 19:22:41.705221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:21:55.724 [2024-07-24 19:22:41.705229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:21:55.724 [2024-07-24 19:22:41.705238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:21:55.724 [2024-07-24 19:22:41.705518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:55.724 [2024-07-24 19:22:41.705534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:55.724 [2024-07-24 19:22:41.705542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:55.724 [2024-07-24 19:22:41.705551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
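The Ctrlr-is-in-error-state / reinitialization-failed / in-failed-state triple repeats once per subsystem, and it continues below for the remaining cnodes. When auditing a run like this, a per-NQN tally makes the interleaving easier to follow; a hedged sketch, assuming this console output was saved to a file named shutdown_tc3.log (hypothetical name):

    # count how many log records mention each subsystem NQN
    grep -o 'nqn\.2016-06\.io\.spdk:cnode[0-9]*' shutdown_tc3.log | sort | uniq -c | sort -rn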
00:21:55.724 [2024-07-24 19:22:41.705562] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:21:55.724 [2024-07-24 19:22:41.705573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:21:55.724 [2024-07-24 19:22:41.705582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:21:55.724 [2024-07-24 19:22:41.705593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:21:55.724 [2024-07-24 19:22:41.705601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:21:55.724 [2024-07-24 19:22:41.705610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:21:55.724 [2024-07-24 19:22:41.705620] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:21:55.724 [2024-07-24 19:22:41.705628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:21:55.724 [2024-07-24 19:22:41.705637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:21:55.724 [2024-07-24 19:22:41.705647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:21:55.724 [2024-07-24 19:22:41.705655] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:21:55.724 [2024-07-24 19:22:41.705665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:21:55.724 [2024-07-24 19:22:41.705700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:55.724 [2024-07-24 19:22:41.705709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:55.724 [2024-07-24 19:22:41.705724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:55.724 [2024-07-24 19:22:41.705731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:55.724 [2024-07-24 19:22:41.705739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
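The bdevperf summary further above is internally consistent: the Total row is the column-wise sum of the ten device rows (IOPS 2170.19, MiB/s 135.64, Fail/s 697.26, the last matching to within a 0.01 rounding step), and with 65536-byte I/O each row's MiB/s is simply IOPS/16 (e.g. 210.04/16 ≈ 13.13). A small awk sketch reproduces the totals, assuming the ten NvmeXn1 data rows were copied into a hypothetical rows.txt:

    # fields per row: name ':' runtime IOPS MiB/s Fail/s TO/s avg min max
    awk '{iops += $4; mibs += $5; fails += $6}
         END {printf "IOPS=%.2f MiB/s=%.2f Fail/s=%.2f\n", iops, mibs, fails}' rows.txt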
00:21:55.983 19:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:21:55.983 19:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:21:56.918 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1589463 00:21:56.918 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1589463) - No such process 00:21:56.918 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:21:56.918 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:21:56.918 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:56.918 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:56.918 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:56.918 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:56.918 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:56.918 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:21:56.918 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:56.918 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:21:56.918 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:56.918 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:56.918 rmmod nvme_tcp 00:21:56.918 rmmod nvme_fabrics 00:21:56.918 rmmod nvme_keyring 00:21:56.918 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:56.918 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:21:56.918 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:21:56.918 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:21:56.918 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:56.918 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:56.918 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:56.918 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:56.918 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:56.918 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.918 19:22:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:56.918 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.458 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:59.458 00:21:59.458 real 0m8.102s 00:21:59.458 user 0m19.964s 00:21:59.458 sys 0m1.649s 00:21:59.458 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:59.458 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:59.458 ************************************ 00:21:59.458 END TEST nvmf_shutdown_tc3 00:21:59.458 ************************************ 00:21:59.458 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:21:59.458 00:21:59.458 real 0m33.263s 00:21:59.458 user 1m19.573s 00:21:59.458 sys 0m10.613s 00:21:59.458 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:59.458 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:59.458 ************************************ 00:21:59.458 END TEST nvmf_shutdown 00:21:59.458 ************************************ 00:21:59.458 19:22:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:21:59.458 00:21:59.458 real 11m12.449s 00:21:59.458 user 23m45.930s 00:21:59.458 sys 3m58.315s 00:21:59.458 19:22:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:59.458 19:22:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:59.458 ************************************ 00:21:59.458 END TEST nvmf_target_extra 00:21:59.458 ************************************ 00:21:59.458 19:22:45 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:59.458 19:22:45 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:59.458 19:22:45 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:59.458 19:22:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:59.458 ************************************ 00:21:59.458 START TEST nvmf_host 00:21:59.458 ************************************ 00:21:59.458 19:22:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:59.458 * Looking for test storage... 
00:21:59.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:59.458 19:22:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.459 ************************************ 00:21:59.459 START TEST nvmf_multicontroller 00:21:59.459 ************************************ 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:59.459 * Looking for test storage... 
00:21:59.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:59.459 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:59.719 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.719 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.719 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.719 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.719 19:22:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.719 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.719 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:59.719 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.719 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:21:59.719 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:59.719 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:59.719 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:59.719 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.719 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.719 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:59.719 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:59.719 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:59.719 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:59.719 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:59.719 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:59.719 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:59.719 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:59.719 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:59.719 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:59.719 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:59.719 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:59.719 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:59.719 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:59.719 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:59.719 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.719 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.719 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.719 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:59.719 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:59.719 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:21:59.719 19:22:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:06.291 19:22:52 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:06.291 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:06.291 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:06.291 Found net devices under 0000:af:00.0: cvl_0_0 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:06.291 Found net devices under 0000:af:00.1: cvl_0_1 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:06.291 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:06.292 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:06.292 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:06.292 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:06.292 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:06.292 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:06.292 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:06.292 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:06.292 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:06.292 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:06.292 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:06.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:06.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:22:06.292 00:22:06.292 --- 10.0.0.2 ping statistics --- 00:22:06.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.292 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:22:06.292 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:06.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:06.292 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:22:06.292 00:22:06.292 --- 10.0.0.1 ping statistics --- 00:22:06.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.292 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:22:06.292 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:06.292 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:22:06.292 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:06.292 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:06.292 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:06.292 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:06.292 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:06.292 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:06.292 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:06.292 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:06.292 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:06.292 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:06.292 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:06.292 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1593780 00:22:06.292 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1593780 00:22:06.292 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:06.292 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1593780 ']' 00:22:06.292 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.292 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:06.292 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.292 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:06.292 19:22:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:06.551 [2024-07-24 19:22:52.536574] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
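The two ping checks above pass in opposite directions across the topology that nvmf_tcp_init just built: the target-facing port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2, while cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator side. Condensed from the traced commands, the setup amounts to:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP reach the initiator port

This is also why the nvmf_tgt below is launched via ip netns exec cvl_0_0_ns_spdk and listens on 10.0.0.2.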
00:22:06.551 [2024-07-24 19:22:52.536625] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:06.551 EAL: No free 2048 kB hugepages reported on node 1 00:22:06.551 [2024-07-24 19:22:52.610402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:06.551 [2024-07-24 19:22:52.684979] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:06.551 [2024-07-24 19:22:52.685014] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:06.551 [2024-07-24 19:22:52.685024] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:06.551 [2024-07-24 19:22:52.685033] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:06.551 [2024-07-24 19:22:52.685040] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:06.551 [2024-07-24 19:22:52.685144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:06.551 [2024-07-24 19:22:52.685228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:06.551 [2024-07-24 19:22:52.685230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:07.120 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:07.120 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:22:07.120 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:07.120 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:07.120 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:07.379 [2024-07-24 19:22:53.388038] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:07.379 Malloc0 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.379 
19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:07.379 [2024-07-24 19:22:53.454331] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:07.379 [2024-07-24 19:22:53.462293] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:07.379 Malloc1 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.379 19:22:53 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1594053 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1594053 /var/tmp/bdevperf.sock 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1594053 ']' 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:07.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
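bdevperf is started with -z and waitforlisten blocks until /var/tmp/bdevperf.sock is up; the rpc_cmd calls that follow are routed in these suites to SPDK's scripts/rpc.py against that socket. The first NOT case below re-attaches the existing NVMe0 name with a different hostnqn and must fail with JSON-RPC error -114; a hedged standalone equivalent, flags copied from the trace:

    # the first attach created NVMe0n1; repeating it with -q (hostnqn) changed is expected to be rejected
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 \
        -q nqn.2021-09-7.io.spdk:00001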
00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:07.379 19:22:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:08.315 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:08.315 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:22:08.315 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:08.315 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.315 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:08.315 NVMe0n1 00:22:08.315 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.315 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:08.315 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:08.315 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.315 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:08.315 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.315 1 00:22:08.315 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:08.315 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:08.315 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:08.315 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:08.315 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:08.315 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:08.315 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:08.315 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:08.315 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.315 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:08.315 request: 00:22:08.315 { 00:22:08.315 "name": "NVMe0", 00:22:08.315 "trtype": "tcp", 00:22:08.315 "traddr": "10.0.0.2", 00:22:08.315 "adrfam": "ipv4", 00:22:08.315 
"trsvcid": "4420", 00:22:08.315 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.315 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:08.315 "hostaddr": "10.0.0.2", 00:22:08.315 "hostsvcid": "60000", 00:22:08.315 "prchk_reftag": false, 00:22:08.315 "prchk_guard": false, 00:22:08.315 "hdgst": false, 00:22:08.315 "ddgst": false, 00:22:08.315 "method": "bdev_nvme_attach_controller", 00:22:08.315 "req_id": 1 00:22:08.315 } 00:22:08.315 Got JSON-RPC error response 00:22:08.315 response: 00:22:08.315 { 00:22:08.315 "code": -114, 00:22:08.315 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:08.315 } 00:22:08.315 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:08.315 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:08.315 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:08.315 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:08.315 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:08.315 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:08.315 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:08.315 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:08.315 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:08.315 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:08.315 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:08.315 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:08.315 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:08.315 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.315 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:08.315 request: 00:22:08.315 { 00:22:08.315 "name": "NVMe0", 00:22:08.315 "trtype": "tcp", 00:22:08.315 "traddr": "10.0.0.2", 00:22:08.315 "adrfam": "ipv4", 00:22:08.316 "trsvcid": "4420", 00:22:08.316 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:08.575 "hostaddr": "10.0.0.2", 00:22:08.575 "hostsvcid": "60000", 00:22:08.575 "prchk_reftag": false, 00:22:08.575 "prchk_guard": false, 00:22:08.575 "hdgst": false, 00:22:08.575 "ddgst": false, 00:22:08.575 "method": "bdev_nvme_attach_controller", 00:22:08.575 "req_id": 1 00:22:08.575 } 00:22:08.575 Got JSON-RPC error response 00:22:08.575 response: 00:22:08.575 { 00:22:08.575 "code": -114, 00:22:08.575 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:22:08.575 } 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:08.575 request: 00:22:08.575 { 00:22:08.575 "name": "NVMe0", 00:22:08.575 "trtype": "tcp", 00:22:08.575 "traddr": "10.0.0.2", 00:22:08.575 "adrfam": "ipv4", 00:22:08.575 "trsvcid": "4420", 00:22:08.575 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.575 "hostaddr": "10.0.0.2", 00:22:08.575 "hostsvcid": "60000", 00:22:08.575 "prchk_reftag": false, 00:22:08.575 "prchk_guard": false, 00:22:08.575 "hdgst": false, 00:22:08.575 "ddgst": false, 00:22:08.575 "multipath": "disable", 00:22:08.575 "method": "bdev_nvme_attach_controller", 00:22:08.575 "req_id": 1 00:22:08.575 } 00:22:08.575 Got JSON-RPC error response 00:22:08.575 response: 00:22:08.575 { 00:22:08.575 "code": -114, 00:22:08.575 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:22:08.575 } 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:08.575 request: 00:22:08.575 { 00:22:08.575 "name": "NVMe0", 00:22:08.575 "trtype": "tcp", 00:22:08.575 "traddr": "10.0.0.2", 00:22:08.575 "adrfam": "ipv4", 00:22:08.575 "trsvcid": "4420", 00:22:08.575 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.575 "hostaddr": "10.0.0.2", 00:22:08.575 "hostsvcid": "60000", 00:22:08.575 "prchk_reftag": false, 00:22:08.575 "prchk_guard": false, 00:22:08.575 "hdgst": false, 00:22:08.575 "ddgst": false, 00:22:08.575 "multipath": "failover", 00:22:08.575 "method": "bdev_nvme_attach_controller", 00:22:08.575 "req_id": 1 00:22:08.575 } 00:22:08.575 Got JSON-RPC error response 00:22:08.575 response: 00:22:08.575 { 00:22:08.575 "code": -114, 00:22:08.575 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:08.575 } 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:08.575 00:22:08.575 19:22:54 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.575 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:08.835 00:22:08.835 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.835 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:08.835 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:08.835 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.835 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:08.835 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.835 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:08.835 19:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:10.214 0 00:22:10.214 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1594053 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1594053 ']' 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1594053 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1594053 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 
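
[annotation] The failover leg just traced compresses to the following sketch (same socket, controller names, and portals as above; rpc.py assumed as the client behind rpc_cmd):

  RPC="$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  # Add a second portal to NVMe0, drop it again, then land it on a
  # separate controller so two controllers serve the same subsystem.
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $RPC bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  [ "$($RPC bdev_nvme_get_controllers | grep -c NVMe)" -eq 2 ]   # both paths live
  # Kick the queued workload that bdevperf was started with.
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
  $RPC bdev_nvme_detach_controller NVMe1
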
00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1594053' 00:22:10.215 killing process with pid 1594053 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1594053 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1594053 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:22:10.215 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:10.215 [2024-07-24 19:22:53.569182] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:22:10.215 [2024-07-24 19:22:53.569237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1594053 ] 00:22:10.215 EAL: No free 2048 kB hugepages reported on node 1 00:22:10.215 [2024-07-24 19:22:53.639640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.215 [2024-07-24 19:22:53.710583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.215 [2024-07-24 19:22:54.911459] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 96038c58-2b92-4d3b-9bfe-c002372e0fcd already exists 00:22:10.215 [2024-07-24 19:22:54.911490] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:96038c58-2b92-4d3b-9bfe-c002372e0fcd alias for bdev NVMe1n1 00:22:10.215 [2024-07-24 19:22:54.911501] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:10.215 Running I/O for 1 seconds... 00:22:10.215 00:22:10.215 Latency(us) 00:22:10.215 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.215 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:10.215 NVMe0n1 : 1.01 24271.79 94.81 0.00 0.00 5257.61 4037.02 16672.36 00:22:10.215 =================================================================================================================== 00:22:10.215 Total : 24271.79 94.81 0.00 0.00 5257.61 4037.02 16672.36 00:22:10.215 Received shutdown signal, test time was about 1.000000 seconds 00:22:10.215 00:22:10.215 Latency(us) 00:22:10.215 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.215 =================================================================================================================== 00:22:10.215 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:10.215 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:10.215 rmmod nvme_tcp 00:22:10.215 rmmod nvme_fabrics 00:22:10.215 rmmod nvme_keyring 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1593780 ']' 00:22:10.215 19:22:56 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1593780 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1593780 ']' 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1593780 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:10.215 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1593780 00:22:10.475 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:10.475 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:10.475 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1593780' 00:22:10.475 killing process with pid 1593780 00:22:10.475 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1593780 00:22:10.475 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1593780 00:22:10.475 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:10.475 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:10.475 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:10.475 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:10.475 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:10.475 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.475 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:10.475 19:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.015 19:22:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:13.015 00:22:13.015 real 0m13.180s 00:22:13.015 user 0m16.670s 00:22:13.015 sys 0m6.165s 00:22:13.015 19:22:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:13.015 19:22:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:13.015 ************************************ 00:22:13.015 END TEST nvmf_multicontroller 00:22:13.015 ************************************ 00:22:13.015 19:22:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:13.015 19:22:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:13.015 19:22:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:13.015 19:22:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.015 ************************************ 00:22:13.015 START TEST nvmf_aer 00:22:13.015 ************************************ 00:22:13.015 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:13.015 * Looking for test storage... 00:22:13.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:13.015 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:13.015 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:13.015 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:13.015 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:13.015 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:13.015 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:13.015 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:13.015 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:13.015 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:13.015 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:13.015 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:13.015 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:13.015 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:13.015 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:13.015 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:13.015 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:13.015 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:13.015 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:13.015 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:13.015 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:13.015 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:13.015 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:13.015 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.015 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.016 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.016 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:22:13.016 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.016 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:22:13.016 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:13.016 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:13.016 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:13.016 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:13.016 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:13.016 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:13.016 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:13.016 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:13.016 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:13.016 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:13.016 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:13.016 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:13.016 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:13.016 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:13.016 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.016 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:13.016 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.016 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:13.016 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:13.016 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:22:13.016 19:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:19.587 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:19.587 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:22:19.587 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:19.587 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:19.587 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:19.587 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:19.587 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:19.587 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:22:19.587 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:19.587 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:22:19.587 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:22:19.587 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:22:19.587 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:22:19.587 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:22:19.587 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:22:19.587 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:19.587 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:19.587 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:19.587 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:19.587 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:19.587 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:19.587 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:19.587 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:19.587 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:19.588 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:19.588 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:19.588 Found net devices under 0000:af:00.0: cvl_0_0 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.588 19:23:05 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:19.588 Found net devices under 0000:af:00.1: cvl_0_1 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:19.588 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:19.848 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:19.848 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:19.848 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:19.848 19:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:19.848 19:23:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:19.848 19:23:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:19.848 19:23:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:19.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:22:19.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:22:19.848 00:22:19.848 --- 10.0.0.2 ping statistics --- 00:22:19.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.848 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:22:19.848 19:23:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:19.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:19.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:22:19.848 00:22:19.848 --- 10.0.0.1 ping statistics --- 00:22:19.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.848 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:22:19.848 19:23:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:19.848 19:23:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:22:19.848 19:23:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:19.848 19:23:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:19.848 19:23:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:19.848 19:23:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:19.848 19:23:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:19.848 19:23:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:19.848 19:23:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:20.107 19:23:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:20.108 19:23:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:20.108 19:23:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:20.108 19:23:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:20.108 19:23:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1598273 00:22:20.108 19:23:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:20.108 19:23:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1598273 00:22:20.108 19:23:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 1598273 ']' 00:22:20.108 19:23:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:20.108 19:23:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:20.108 19:23:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:20.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:20.108 19:23:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:20.108 19:23:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:20.108 [2024-07-24 19:23:06.164872] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:22:20.108 [2024-07-24 19:23:06.164917] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:20.108 EAL: No free 2048 kB hugepages reported on node 1 00:22:20.108 [2024-07-24 19:23:06.237710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:20.108 [2024-07-24 19:23:06.306794] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:20.108 [2024-07-24 19:23:06.306836] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:20.108 [2024-07-24 19:23:06.306845] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:20.108 [2024-07-24 19:23:06.306853] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:20.108 [2024-07-24 19:23:06.306860] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:20.108 [2024-07-24 19:23:06.306912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.108 [2024-07-24 19:23:06.307010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:20.108 [2024-07-24 19:23:06.307093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:20.108 [2024-07-24 19:23:06.307095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:21.046 19:23:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:21.046 19:23:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:22:21.046 19:23:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:21.046 19:23:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:21.046 19:23:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:21.046 [2024-07-24 19:23:07.022147] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:21.046 Malloc0 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:21.046 19:23:07 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:21.046 [2024-07-24 19:23:07.076642] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:21.046 [ 00:22:21.046 { 00:22:21.046 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:21.046 "subtype": "Discovery", 00:22:21.046 "listen_addresses": [], 00:22:21.046 "allow_any_host": true, 00:22:21.046 "hosts": [] 00:22:21.046 }, 00:22:21.046 { 00:22:21.046 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:21.046 "subtype": "NVMe", 00:22:21.046 "listen_addresses": [ 00:22:21.046 { 00:22:21.046 "trtype": "TCP", 00:22:21.046 "adrfam": "IPv4", 00:22:21.046 "traddr": "10.0.0.2", 00:22:21.046 "trsvcid": "4420" 00:22:21.046 } 00:22:21.046 ], 00:22:21.046 "allow_any_host": true, 00:22:21.046 "hosts": [], 00:22:21.046 "serial_number": "SPDK00000000000001", 00:22:21.046 "model_number": "SPDK bdev Controller", 00:22:21.046 "max_namespaces": 2, 00:22:21.046 "min_cntlid": 1, 00:22:21.046 "max_cntlid": 65519, 00:22:21.046 "namespaces": [ 00:22:21.046 { 00:22:21.046 "nsid": 1, 00:22:21.046 "bdev_name": "Malloc0", 00:22:21.046 "name": "Malloc0", 00:22:21.046 "nguid": "2F182B51943D40CA85544CE9851C9699", 00:22:21.046 "uuid": "2f182b51-943d-40ca-8554-4ce9851c9699" 00:22:21.046 } 00:22:21.046 ] 00:22:21.046 } 00:22:21.046 ] 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1598422 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:21.046 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:22:21.046 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:21.306 Malloc1 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:21.306 Asynchronous Event Request test 00:22:21.306 Attaching to 10.0.0.2 00:22:21.306 Attached to 10.0.0.2 00:22:21.306 Registering asynchronous event callbacks... 00:22:21.306 Starting namespace attribute notice tests for all controllers... 00:22:21.306 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:21.306 aer_cb - Changed Namespace 00:22:21.306 Cleaning up... 
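
[annotation] The AER exchange above, sketched end to end with the paths from the trace. One hedged inference from the waitforfile loop: the aer helper appears to touch the given file once its event callbacks are armed, so the file doubles as a readiness gate before the namespace is added:

  rm -f /tmp/aer_touch_file
  $SPDK/test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  aerpid=$!
  while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done   # aer armed and listening
  # Registering a second namespace fires the Changed Namespace notice
  # (aer_cb for log page 4 in the output above).
  $SPDK/scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
  wait $aerpid   # returns once the AEN has been observed and cleanup ran
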
00:22:21.306 [ 00:22:21.306 { 00:22:21.306 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:21.306 "subtype": "Discovery", 00:22:21.306 "listen_addresses": [], 00:22:21.306 "allow_any_host": true, 00:22:21.306 "hosts": [] 00:22:21.306 }, 00:22:21.306 { 00:22:21.306 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:21.306 "subtype": "NVMe", 00:22:21.306 "listen_addresses": [ 00:22:21.306 { 00:22:21.306 "trtype": "TCP", 00:22:21.306 "adrfam": "IPv4", 00:22:21.306 "traddr": "10.0.0.2", 00:22:21.306 "trsvcid": "4420" 00:22:21.306 } 00:22:21.306 ], 00:22:21.306 "allow_any_host": true, 00:22:21.306 "hosts": [], 00:22:21.306 "serial_number": "SPDK00000000000001", 00:22:21.306 "model_number": "SPDK bdev Controller", 00:22:21.306 "max_namespaces": 2, 00:22:21.306 "min_cntlid": 1, 00:22:21.306 "max_cntlid": 65519, 00:22:21.306 "namespaces": [ 00:22:21.306 { 00:22:21.306 "nsid": 1, 00:22:21.306 "bdev_name": "Malloc0", 00:22:21.306 "name": "Malloc0", 00:22:21.306 "nguid": "2F182B51943D40CA85544CE9851C9699", 00:22:21.306 "uuid": "2f182b51-943d-40ca-8554-4ce9851c9699" 00:22:21.306 }, 00:22:21.306 { 00:22:21.306 "nsid": 2, 00:22:21.306 "bdev_name": "Malloc1", 00:22:21.306 "name": "Malloc1", 00:22:21.306 "nguid": "E03DD7471C274AD7A0FF663768F24F1D", 00:22:21.306 "uuid": "e03dd747-1c27-4ad7-a0ff-663768f24f1d" 00:22:21.306 } 00:22:21.306 ] 00:22:21.306 } 00:22:21.306 ] 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1598422 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:21.306 rmmod 
nvme_tcp 00:22:21.306 rmmod nvme_fabrics 00:22:21.306 rmmod nvme_keyring 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1598273 ']' 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1598273 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 1598273 ']' 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 1598273 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:21.306 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1598273 00:22:21.565 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:21.565 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:21.565 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1598273' 00:22:21.565 killing process with pid 1598273 00:22:21.565 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 1598273 00:22:21.566 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 1598273 00:22:21.566 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:21.566 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:21.566 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:21.566 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:21.566 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:21.566 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.566 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:21.566 19:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.103 19:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:24.103 00:22:24.103 real 0m10.998s 00:22:24.103 user 0m7.634s 00:22:24.103 sys 0m6.025s 00:22:24.103 19:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:24.103 19:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:24.103 ************************************ 00:22:24.103 END TEST nvmf_aer 00:22:24.103 ************************************ 00:22:24.103 19:23:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:24.103 19:23:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:24.103 19:23:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:24.103 19:23:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.103 
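Before the async_init test begins, one note on the teardown pattern that closed nvmf_aer above: unload the kernel NVMe-oF modules (modprobe -v -r nvme-tcp pulls out nvme_tcp, nvme_fabrics and nvme_keyring, as the rmmod echoes show), then kill the target by pid only after confirming it is still alive and really is the SPDK reactor. A hedged sketch of that killprocess idiom as reconstructed from the trace (the real autotest_common.sh helper may carry extra retries):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0     # already gone, nothing to do
        [ "$(uname)" = Linux ] || return 1         # the ps flags below are Linux-specific
        local name
        name=$(ps --no-headers -o comm= "$pid")    # reactor_0 in this run
        [ "$name" != sudo ] || return 1            # never kill a sudo wrapper by mistake
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                 # reap so the exit status is collected
    }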
************************************ 00:22:24.103 START TEST nvmf_async_init 00:22:24.103 ************************************ 00:22:24.103 19:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:24.103 * Looking for test storage... 00:22:24.103 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:24.103 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:24.103 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:24.103 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:24.103 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:24.103 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:24.103 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:24.103 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:24.103 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:24.103 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:24.103 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:24.103 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:24.103 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:24.103 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:24.103 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:24.103 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:24.103 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:24.103 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:24.103 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:24.103 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:24.103 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:24.103 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:24.103 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:24.103 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same golangci/protoc/go toolchain directories repeated several more times, then the standard system paths; heavily duplicated value elided ...] 00:22:24.104 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=[same value, re-prepended; elided] 00:22:24.104 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=[same value, re-prepended; elided] 00:22:24.104 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:24.104 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo [exported PATH value; elided] 00:22:24.104 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:22:24.104 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:24.104 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:24.104 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:24.104 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:24.104 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:24.104 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:24.104 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:24.104
19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:24.104 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:24.104 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:24.104 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:24.104 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:24.104 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:24.104 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:24.104 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=8b17c459ab0a426ea0902a18b218babd 00:22:24.104 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:24.104 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:24.104 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:24.104 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:24.104 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:24.104 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:24.104 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.104 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:24.104 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.104 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:24.104 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:24.104 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:22:24.104 19:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 
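Before the NIC discovery that follows, note how host/async_init.sh set itself up above: the namespace is backed by a null bdev and its GUID is a fresh UUID with the dashes stripped. In sketch form (values are this run's; every run generates a new GUID):

    null_bdev_size=1024          # MiB: the bdev dumps below report 2097152 blocks x 512 B = 1 GiB
    null_block_size=512          # bytes per block
    null_bdev=null0
    nvme_bdev=nvme0
    nguid=$(uuidgen | tr -d -)   # 8b17c459ab0a426ea0902a18b218babd here

The same value surfaces later as both the nguid and the dashed uuid/alias of nvme0n1 in every bdev_get_bdevs dump.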
00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:30.728 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:30.728 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:30.728 
19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.728 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:30.729 Found net devices under 0000:af:00.0: cvl_0_0 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:30.729 Found net devices under 0000:af:00.1: cvl_0_1 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:30.729 19:23:16 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:30.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:30.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:22:30.729 00:22:30.729 --- 10.0.0.2 ping statistics --- 00:22:30.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.729 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:30.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:30.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:22:30.729 00:22:30.729 --- 10.0.0.1 ping statistics --- 00:22:30.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.729 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1602180 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1602180 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 1602180 ']' 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:30.729 19:23:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:30.729 [2024-07-24 19:23:16.717873] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
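The two ping exchanges above validate the split-namespace topology that nvmf_tcp_init builds: the first e810 port (cvl_0_0) is moved into a private network namespace for the target, while the second (cvl_0_1) stays in the root namespace for the initiator. Reconstructed from the traced commands (interface names are this machine's; requires root):

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                  # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                               # root ns -> target ns
    ip netns exec "$NS" ping -c 1 10.0.0.1           # target ns -> root ns

nvmf_tgt itself is then started under ip netns exec, which is why the nvmfappstart line above prefixes the binary with the namespace command and the target ends up listening on 10.0.0.2.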
00:22:30.729 [2024-07-24 19:23:16.717920] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:30.729 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.729 [2024-07-24 19:23:16.792345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.729 [2024-07-24 19:23:16.863824] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:30.729 [2024-07-24 19:23:16.863861] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:30.729 [2024-07-24 19:23:16.863870] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:30.729 [2024-07-24 19:23:16.863878] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:30.729 [2024-07-24 19:23:16.863885] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:30.729 [2024-07-24 19:23:16.863907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.298 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:31.298 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:22:31.298 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:31.298 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:31.298 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:31.558 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:31.558 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:31.558 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.558 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:31.558 [2024-07-24 19:23:17.558375] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.558 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.558 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:31.558 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.558 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:31.558 null0 00:22:31.558 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.558 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:31.558 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.558 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:31.558 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.558 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:31.558 19:23:17 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.558 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:31.558 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.558 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8b17c459ab0a426ea0902a18b218babd 00:22:31.558 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.558 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:31.558 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.558 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:31.558 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.558 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:31.558 [2024-07-24 19:23:17.602598] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.558 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.558 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:31.558 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.558 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:31.818 nvme0n1 00:22:31.818 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.818 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:31.818 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.818 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:31.818 [ 00:22:31.818 { 00:22:31.818 "name": "nvme0n1", 00:22:31.818 "aliases": [ 00:22:31.818 "8b17c459-ab0a-426e-a090-2a18b218babd" 00:22:31.818 ], 00:22:31.818 "product_name": "NVMe disk", 00:22:31.818 "block_size": 512, 00:22:31.818 "num_blocks": 2097152, 00:22:31.818 "uuid": "8b17c459-ab0a-426e-a090-2a18b218babd", 00:22:31.818 "assigned_rate_limits": { 00:22:31.818 "rw_ios_per_sec": 0, 00:22:31.818 "rw_mbytes_per_sec": 0, 00:22:31.818 "r_mbytes_per_sec": 0, 00:22:31.818 "w_mbytes_per_sec": 0 00:22:31.818 }, 00:22:31.818 "claimed": false, 00:22:31.818 "zoned": false, 00:22:31.818 "supported_io_types": { 00:22:31.818 "read": true, 00:22:31.818 "write": true, 00:22:31.818 "unmap": false, 00:22:31.818 "flush": true, 00:22:31.818 "reset": true, 00:22:31.818 "nvme_admin": true, 00:22:31.818 "nvme_io": true, 00:22:31.818 "nvme_io_md": false, 00:22:31.818 "write_zeroes": true, 00:22:31.818 "zcopy": false, 00:22:31.818 "get_zone_info": false, 00:22:31.818 "zone_management": false, 00:22:31.818 "zone_append": false, 00:22:31.818 "compare": true, 00:22:31.818 "compare_and_write": true, 00:22:31.818 "abort": true, 00:22:31.818 "seek_hole": false, 00:22:31.818 "seek_data": false, 00:22:31.818 "copy": true, 00:22:31.818 "nvme_iov_md": 
false 00:22:31.818 }, 00:22:31.818 "memory_domains": [ 00:22:31.818 { 00:22:31.818 "dma_device_id": "system", 00:22:31.818 "dma_device_type": 1 00:22:31.818 } 00:22:31.818 ], 00:22:31.818 "driver_specific": { 00:22:31.818 "nvme": [ 00:22:31.818 { 00:22:31.818 "trid": { 00:22:31.818 "trtype": "TCP", 00:22:31.818 "adrfam": "IPv4", 00:22:31.818 "traddr": "10.0.0.2", 00:22:31.818 "trsvcid": "4420", 00:22:31.818 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:31.818 }, 00:22:31.818 "ctrlr_data": { 00:22:31.818 "cntlid": 1, 00:22:31.818 "vendor_id": "0x8086", 00:22:31.818 "model_number": "SPDK bdev Controller", 00:22:31.818 "serial_number": "00000000000000000000", 00:22:31.818 "firmware_revision": "24.09", 00:22:31.818 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:31.818 "oacs": { 00:22:31.818 "security": 0, 00:22:31.818 "format": 0, 00:22:31.818 "firmware": 0, 00:22:31.818 "ns_manage": 0 00:22:31.818 }, 00:22:31.818 "multi_ctrlr": true, 00:22:31.818 "ana_reporting": false 00:22:31.818 }, 00:22:31.818 "vs": { 00:22:31.818 "nvme_version": "1.3" 00:22:31.818 }, 00:22:31.818 "ns_data": { 00:22:31.818 "id": 1, 00:22:31.818 "can_share": true 00:22:31.818 } 00:22:31.818 } 00:22:31.818 ], 00:22:31.818 "mp_policy": "active_passive" 00:22:31.818 } 00:22:31.818 } 00:22:31.818 ] 00:22:31.818 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.818 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:31.818 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.818 19:23:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:31.818 [2024-07-24 19:23:17.872106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:31.818 [2024-07-24 19:23:17.872163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2a4d0 (9): Bad file descriptor 00:22:31.818 [2024-07-24 19:23:18.003797] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
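The reset exercised above tears down the admin queue first (hence the transient "Failed to flush tqpair ... Bad file descriptor" error, which is expected here) and then reconnects. The observable proof is in the bdev dump that follows: cntlid moves from 1 to 2, because the reconnect opens a new controller association. A sketch of that check (rpc.py stands in for rpc_cmd; the jq filter is an assumption, shaped after the JSON shown in these dumps):

    rpc.py bdev_nvme_reset_controller nvme0
    rpc.py bdev_get_bdevs -b nvme0n1 \
        | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'   # 1 before the reset, 2 after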
00:22:31.818 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.818 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:31.818 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.818 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:31.818 [ 00:22:31.818 { 00:22:31.818 "name": "nvme0n1", 00:22:31.818 "aliases": [ 00:22:31.818 "8b17c459-ab0a-426e-a090-2a18b218babd" 00:22:31.818 ], 00:22:31.818 "product_name": "NVMe disk", 00:22:31.818 "block_size": 512, 00:22:31.818 "num_blocks": 2097152, 00:22:31.818 "uuid": "8b17c459-ab0a-426e-a090-2a18b218babd", 00:22:31.818 "assigned_rate_limits": { 00:22:31.818 "rw_ios_per_sec": 0, 00:22:31.818 "rw_mbytes_per_sec": 0, 00:22:31.818 "r_mbytes_per_sec": 0, 00:22:31.818 "w_mbytes_per_sec": 0 00:22:31.818 }, 00:22:31.818 "claimed": false, 00:22:31.818 "zoned": false, 00:22:31.818 "supported_io_types": { 00:22:31.818 "read": true, 00:22:31.818 "write": true, 00:22:31.818 "unmap": false, 00:22:31.818 "flush": true, 00:22:31.818 "reset": true, 00:22:31.818 "nvme_admin": true, 00:22:31.818 "nvme_io": true, 00:22:31.818 "nvme_io_md": false, 00:22:31.818 "write_zeroes": true, 00:22:31.818 "zcopy": false, 00:22:31.818 "get_zone_info": false, 00:22:31.818 "zone_management": false, 00:22:31.819 "zone_append": false, 00:22:31.819 "compare": true, 00:22:31.819 "compare_and_write": true, 00:22:31.819 "abort": true, 00:22:31.819 "seek_hole": false, 00:22:31.819 "seek_data": false, 00:22:31.819 "copy": true, 00:22:31.819 "nvme_iov_md": false 00:22:31.819 }, 00:22:31.819 "memory_domains": [ 00:22:31.819 { 00:22:31.819 "dma_device_id": "system", 00:22:31.819 "dma_device_type": 1 00:22:31.819 } 00:22:31.819 ], 00:22:31.819 "driver_specific": { 00:22:31.819 "nvme": [ 00:22:31.819 { 00:22:31.819 "trid": { 00:22:31.819 "trtype": "TCP", 00:22:31.819 "adrfam": "IPv4", 00:22:31.819 "traddr": "10.0.0.2", 00:22:31.819 "trsvcid": "4420", 00:22:31.819 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:31.819 }, 00:22:31.819 "ctrlr_data": { 00:22:31.819 "cntlid": 2, 00:22:31.819 "vendor_id": "0x8086", 00:22:31.819 "model_number": "SPDK bdev Controller", 00:22:31.819 "serial_number": "00000000000000000000", 00:22:31.819 "firmware_revision": "24.09", 00:22:31.819 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:31.819 "oacs": { 00:22:31.819 "security": 0, 00:22:31.819 "format": 0, 00:22:31.819 "firmware": 0, 00:22:31.819 "ns_manage": 0 00:22:31.819 }, 00:22:31.819 "multi_ctrlr": true, 00:22:31.819 "ana_reporting": false 00:22:31.819 }, 00:22:31.819 "vs": { 00:22:31.819 "nvme_version": "1.3" 00:22:31.819 }, 00:22:31.819 "ns_data": { 00:22:31.819 "id": 1, 00:22:31.819 "can_share": true 00:22:31.819 } 00:22:31.819 } 00:22:31.819 ], 00:22:31.819 "mp_policy": "active_passive" 00:22:31.819 } 00:22:31.819 } 00:22:31.819 ] 00:22:31.819 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.819 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:31.819 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.819 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:31.819 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.819 19:23:18 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:32.079 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.LBZQZjo8cx 00:22:32.079 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:32.079 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.LBZQZjo8cx 00:22:32.079 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:32.079 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.079 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:32.079 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.079 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:32.079 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.079 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:32.079 [2024-07-24 19:23:18.076719] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:32.079 [2024-07-24 19:23:18.076843] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:32.079 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.079 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LBZQZjo8cx 00:22:32.079 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.079 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:32.079 [2024-07-24 19:23:18.084738] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:32.079 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.079 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LBZQZjo8cx 00:22:32.079 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.079 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:32.079 [2024-07-24 19:23:18.096781] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:32.079 [2024-07-24 19:23:18.096818] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:32.079 nvme0n1 00:22:32.079 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.079 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:32.079 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 
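The last leg of async_init exercises TLS: the trace above writes an NVMe TLS PSK to a temp file, restricts its permissions, disables allow-any-host, opens a --secure-channel listener on a second port, and finally attaches with the same PSK and an explicit host NQN. In sketch form (rpc.py again stands in for rpc_cmd; note the PSK-path form of these RPCs is the one the warnings here flag as deprecated for v24.09):

    key=$(mktemp)                                    # /tmp/tmp.LBZQZjo8cx in this run
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key"
    chmod 0600 "$key"
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host1 --psk "$key"
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key"

The third bdev dump below shows the effect: the same namespace reattached over trsvcid 4421 with cntlid 3.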
00:22:32.079 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:32.079 [ 00:22:32.079 { 00:22:32.079 "name": "nvme0n1", 00:22:32.079 "aliases": [ 00:22:32.079 "8b17c459-ab0a-426e-a090-2a18b218babd" 00:22:32.079 ], 00:22:32.079 "product_name": "NVMe disk", 00:22:32.079 "block_size": 512, 00:22:32.079 "num_blocks": 2097152, 00:22:32.079 "uuid": "8b17c459-ab0a-426e-a090-2a18b218babd", 00:22:32.079 "assigned_rate_limits": { 00:22:32.079 "rw_ios_per_sec": 0, 00:22:32.079 "rw_mbytes_per_sec": 0, 00:22:32.079 "r_mbytes_per_sec": 0, 00:22:32.079 "w_mbytes_per_sec": 0 00:22:32.079 }, 00:22:32.079 "claimed": false, 00:22:32.079 "zoned": false, 00:22:32.079 "supported_io_types": { 00:22:32.079 "read": true, 00:22:32.079 "write": true, 00:22:32.079 "unmap": false, 00:22:32.079 "flush": true, 00:22:32.079 "reset": true, 00:22:32.079 "nvme_admin": true, 00:22:32.079 "nvme_io": true, 00:22:32.079 "nvme_io_md": false, 00:22:32.079 "write_zeroes": true, 00:22:32.079 "zcopy": false, 00:22:32.079 "get_zone_info": false, 00:22:32.079 "zone_management": false, 00:22:32.079 "zone_append": false, 00:22:32.079 "compare": true, 00:22:32.079 "compare_and_write": true, 00:22:32.079 "abort": true, 00:22:32.079 "seek_hole": false, 00:22:32.079 "seek_data": false, 00:22:32.079 "copy": true, 00:22:32.079 "nvme_iov_md": false 00:22:32.079 }, 00:22:32.079 "memory_domains": [ 00:22:32.079 { 00:22:32.079 "dma_device_id": "system", 00:22:32.079 "dma_device_type": 1 00:22:32.079 } 00:22:32.079 ], 00:22:32.079 "driver_specific": { 00:22:32.079 "nvme": [ 00:22:32.079 { 00:22:32.079 "trid": { 00:22:32.079 "trtype": "TCP", 00:22:32.079 "adrfam": "IPv4", 00:22:32.079 "traddr": "10.0.0.2", 00:22:32.079 "trsvcid": "4421", 00:22:32.079 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:32.080 }, 00:22:32.080 "ctrlr_data": { 00:22:32.080 "cntlid": 3, 00:22:32.080 "vendor_id": "0x8086", 00:22:32.080 "model_number": "SPDK bdev Controller", 00:22:32.080 "serial_number": "00000000000000000000", 00:22:32.080 "firmware_revision": "24.09", 00:22:32.080 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:32.080 "oacs": { 00:22:32.080 "security": 0, 00:22:32.080 "format": 0, 00:22:32.080 "firmware": 0, 00:22:32.080 "ns_manage": 0 00:22:32.080 }, 00:22:32.080 "multi_ctrlr": true, 00:22:32.080 "ana_reporting": false 00:22:32.080 }, 00:22:32.080 "vs": { 00:22:32.080 "nvme_version": "1.3" 00:22:32.080 }, 00:22:32.080 "ns_data": { 00:22:32.080 "id": 1, 00:22:32.080 "can_share": true 00:22:32.080 } 00:22:32.080 } 00:22:32.080 ], 00:22:32.080 "mp_policy": "active_passive" 00:22:32.080 } 00:22:32.080 } 00:22:32.080 ] 00:22:32.080 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.080 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:32.080 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.080 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:32.080 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.080 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.LBZQZjo8cx 00:22:32.080 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:22:32.080 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:22:32.080 19:23:18 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:32.080 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:22:32.080 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:32.080 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:22:32.080 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:32.080 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:32.080 rmmod nvme_tcp 00:22:32.080 rmmod nvme_fabrics 00:22:32.080 rmmod nvme_keyring 00:22:32.080 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:32.080 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:22:32.080 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:22:32.080 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1602180 ']' 00:22:32.080 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1602180 00:22:32.080 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 1602180 ']' 00:22:32.080 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 1602180 00:22:32.080 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:22:32.080 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:32.080 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1602180 00:22:32.339 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:32.339 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:32.339 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1602180' 00:22:32.339 killing process with pid 1602180 00:22:32.339 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 1602180 00:22:32.339 [2024-07-24 19:23:18.334491] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:32.339 [2024-07-24 19:23:18.334517] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:32.339 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 1602180 00:22:32.339 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:32.339 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:32.339 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:32.339 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:32.339 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:32.339 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.339 19:23:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:32.339 19:23:18 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.878 19:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:34.878 00:22:34.878 real 0m10.670s 00:22:34.878 user 0m3.824s 00:22:34.878 sys 0m5.484s 00:22:34.878 19:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:34.878 19:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.878 ************************************ 00:22:34.878 END TEST nvmf_async_init 00:22:34.878 ************************************ 00:22:34.878 19:23:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:34.878 19:23:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:34.878 19:23:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:34.878 19:23:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.878 ************************************ 00:22:34.878 START TEST dma 00:22:34.878 ************************************ 00:22:34.878 19:23:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:34.878 * Looking for test storage... 00:22:34.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:34.878 19:23:20 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:34.878 19:23:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:34.878 19:23:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:34.878 19:23:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:34.878 19:23:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:34.878 19:23:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:34.878 19:23:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:34.878 19:23:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:34.878 19:23:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:34.878 19:23:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:34.878 19:23:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:34.878 19:23:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:34.878 19:23:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:34.878 19:23:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:34.878 19:23:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:34.878 19:23:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:34.878 19:23:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:34.878 19:23:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:34.878 19:23:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:34.878 
19:23:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:34.878 19:23:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.878 19:23:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:34.878 19:23:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=[same duplicated toolchain/system value as above; elided] 00:22:34.878 19:23:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=[elided] 00:22:34.878 19:23:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=[elided] 00:22:34.878 19:23:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:34.878 19:23:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo [elided] 00:22:34.879 19:23:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:22:34.879 19:23:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:34.879 19:23:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:34.879 19:23:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:34.879 19:23:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:34.879 19:23:20
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:34.879 19:23:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:34.879 19:23:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:34.879 19:23:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:34.879 19:23:20 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:34.879 19:23:20 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:34.879 00:22:34.879 real 0m0.128s 00:22:34.879 user 0m0.055s 00:22:34.879 sys 0m0.081s 00:22:34.879 19:23:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:34.879 19:23:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:34.879 ************************************ 00:22:34.879 END TEST dma 00:22:34.879 ************************************ 00:22:34.879 19:23:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:34.879 19:23:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:34.879 19:23:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:34.879 19:23:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.879 ************************************ 00:22:34.879 START TEST nvmf_identify 00:22:34.879 ************************************ 00:22:34.879 19:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:34.879 * Looking for test storage... 00:22:34.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:34.879 19:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:34.879 19:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:34.879 19:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:34.879 19:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:34.879 19:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:34.879 19:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:34.879 19:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:34.879 19:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:34.879 19:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:34.879 19:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:34.879 19:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:34.879 19:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:34.879 19:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:34.879 19:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:34.879 19:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:22:34.879 19:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:34.879 19:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:34.879 19:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:34.879 19:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:34.879 19:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:34.879 19:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.879 19:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:34.879 19:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.879 19:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.879 19:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.879 19:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:34.879 19:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.879 19:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:22:34.879 19:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:34.879 19:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:34.879 19:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:34.879 19:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:34.879 19:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:34.879 19:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:34.879 19:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:34.879 19:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:34.879 19:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:34.879 19:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:34.879 19:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:34.879 19:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:34.879 19:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:34.879 19:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:34.879 19:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:34.879 19:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:34.879 19:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.879 19:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.879 19:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.879 19:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:34.879 19:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:34.879 19:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:22:34.879 19:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:41.453 19:23:27 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:41.453 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:41.453 19:23:27 
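The "Found ..." lines around here come from the NIC-discovery loop in test/nvmf/common.sh. A simplified sketch of what it does, with the PCI addresses and device ID taken from this trace:

  # Match supported PCI IDs, then list the kernel net devices under each function
  intel=0x8086
  e810_devs=(0000:af:00.0 0000:af:00.1)            # both report device ID 0x159b (E810)
  for pci in "${e810_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")      # strip the sysfs path prefix
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done

On this rig the two E810 ports surface as cvl_0_0 and cvl_0_1, which the next step splits between target and initiator roles.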
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:41.453 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:41.453 Found net devices under 0000:af:00.0: cvl_0_0 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:41.453 Found net devices under 0000:af:00.1: cvl_0_1 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:41.453 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:41.453 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:41.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:22:41.454 00:22:41.454 --- 10.0.0.2 ping statistics --- 00:22:41.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.454 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:22:41.454 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:41.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:41.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:22:41.454 00:22:41.454 --- 10.0.0.1 ping statistics --- 00:22:41.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.454 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:22:41.454 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:41.454 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:22:41.454 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:41.454 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:41.454 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:41.454 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:41.454 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:41.454 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:41.454 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:41.713 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:41.713 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:41.713 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:41.713 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1606156 00:22:41.713 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:41.713 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:41.713 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1606156 00:22:41.713 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 1606156 ']' 00:22:41.713 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.713 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:41.713 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:41.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:41.713 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:41.713 19:23:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:41.713 [2024-07-24 19:23:27.755571] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
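The namespace plumbing traced above lets a single host play both NVMe-oF target and initiator over the back-to-back E810 ports. Reproduced as plain commands from the trace (the nvmf_tgt path is shortened to its repo-relative form):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port moves into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target sanity check
  # the target is then started inside the namespace, as in the trace:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The DPDK/EAL banner that follows is this nvmf_tgt instance coming up, with one reactor per core in the 0xF mask.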
00:22:41.713 [2024-07-24 19:23:27.755623] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:41.713 EAL: No free 2048 kB hugepages reported on node 1 00:22:41.713 [2024-07-24 19:23:27.833249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:41.713 [2024-07-24 19:23:27.909531] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:41.713 [2024-07-24 19:23:27.909574] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:41.713 [2024-07-24 19:23:27.909585] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:41.713 [2024-07-24 19:23:27.909594] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:41.713 [2024-07-24 19:23:27.909601] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:41.713 [2024-07-24 19:23:27.909867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:41.713 [2024-07-24 19:23:27.909888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:41.713 [2024-07-24 19:23:27.909998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:41.713 [2024-07-24 19:23:27.910000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.653 19:23:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:42.653 19:23:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:22:42.653 19:23:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:42.653 19:23:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.653 19:23:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:42.653 [2024-07-24 19:23:28.567753] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:42.653 19:23:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.653 19:23:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:42.653 19:23:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:42.653 19:23:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:42.653 19:23:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:42.653 19:23:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.653 19:23:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:42.653 Malloc0 00:22:42.653 19:23:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.653 19:23:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:42.653 19:23:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.653 19:23:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:42.653 19:23:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:22:42.653 19:23:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:42.653 19:23:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.653 19:23:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:42.653 19:23:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.653 19:23:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:42.653 19:23:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.653 19:23:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:42.653 [2024-07-24 19:23:28.662602] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:42.653 19:23:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.653 19:23:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:42.653 19:23:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.653 19:23:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:42.653 19:23:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.653 19:23:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:42.653 19:23:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.653 19:23:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:42.653 [ 00:22:42.653 { 00:22:42.653 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:42.653 "subtype": "Discovery", 00:22:42.653 "listen_addresses": [ 00:22:42.653 { 00:22:42.653 "trtype": "TCP", 00:22:42.653 "adrfam": "IPv4", 00:22:42.653 "traddr": "10.0.0.2", 00:22:42.653 "trsvcid": "4420" 00:22:42.653 } 00:22:42.653 ], 00:22:42.653 "allow_any_host": true, 00:22:42.653 "hosts": [] 00:22:42.653 }, 00:22:42.653 { 00:22:42.653 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:42.653 "subtype": "NVMe", 00:22:42.653 "listen_addresses": [ 00:22:42.653 { 00:22:42.653 "trtype": "TCP", 00:22:42.653 "adrfam": "IPv4", 00:22:42.653 "traddr": "10.0.0.2", 00:22:42.653 "trsvcid": "4420" 00:22:42.653 } 00:22:42.653 ], 00:22:42.653 "allow_any_host": true, 00:22:42.653 "hosts": [], 00:22:42.653 "serial_number": "SPDK00000000000001", 00:22:42.653 "model_number": "SPDK bdev Controller", 00:22:42.653 "max_namespaces": 32, 00:22:42.653 "min_cntlid": 1, 00:22:42.653 "max_cntlid": 65519, 00:22:42.653 "namespaces": [ 00:22:42.653 { 00:22:42.653 "nsid": 1, 00:22:42.653 "bdev_name": "Malloc0", 00:22:42.653 "name": "Malloc0", 00:22:42.653 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:42.653 "eui64": "ABCDEF0123456789", 00:22:42.653 "uuid": "d4f23391-0cf3-4f95-8f99-579b629718ac" 00:22:42.653 } 00:22:42.653 ] 00:22:42.653 } 00:22:42.653 ] 00:22:42.653 19:23:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.653 19:23:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' 
trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:42.653 [2024-07-24 19:23:28.722186] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:22:42.653 [2024-07-24 19:23:28.722226] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1606265 ] 00:22:42.653 EAL: No free 2048 kB hugepages reported on node 1 00:22:42.653 [2024-07-24 19:23:28.752078] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:42.653 [2024-07-24 19:23:28.752126] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:42.653 [2024-07-24 19:23:28.752132] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:42.653 [2024-07-24 19:23:28.752145] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:42.653 [2024-07-24 19:23:28.752155] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:42.653 [2024-07-24 19:23:28.752578] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:42.653 [2024-07-24 19:23:28.752605] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x14b9f00 0 00:22:42.653 [2024-07-24 19:23:28.765721] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:42.653 [2024-07-24 19:23:28.765739] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:42.653 [2024-07-24 19:23:28.765745] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:42.653 [2024-07-24 19:23:28.765749] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:42.653 [2024-07-24 19:23:28.765793] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.653 [2024-07-24 19:23:28.765801] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.653 [2024-07-24 19:23:28.765806] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14b9f00) 00:22:42.653 [2024-07-24 19:23:28.765820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:42.653 [2024-07-24 19:23:28.765837] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1524e40, cid 0, qid 0 00:22:42.653 [2024-07-24 19:23:28.772723] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.653 [2024-07-24 19:23:28.772732] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.653 [2024-07-24 19:23:28.772737] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.653 [2024-07-24 19:23:28.772742] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1524e40) on tqpair=0x14b9f00 00:22:42.653 [2024-07-24 19:23:28.772755] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:42.653 [2024-07-24 19:23:28.772762] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:42.653 [2024-07-24 19:23:28.772769] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] 
setting state to read vs wait for vs (no timeout) 00:22:42.653 [2024-07-24 19:23:28.772784] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.653 [2024-07-24 19:23:28.772789] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.653 [2024-07-24 19:23:28.772794] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14b9f00) 00:22:42.653 [2024-07-24 19:23:28.772802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.653 [2024-07-24 19:23:28.772817] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1524e40, cid 0, qid 0 00:22:42.653 [2024-07-24 19:23:28.773022] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.653 [2024-07-24 19:23:28.773029] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.653 [2024-07-24 19:23:28.773034] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.653 [2024-07-24 19:23:28.773039] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1524e40) on tqpair=0x14b9f00 00:22:42.653 [2024-07-24 19:23:28.773047] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:42.653 [2024-07-24 19:23:28.773057] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:42.653 [2024-07-24 19:23:28.773065] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.653 [2024-07-24 19:23:28.773070] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.653 [2024-07-24 19:23:28.773075] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14b9f00) 00:22:42.653 [2024-07-24 19:23:28.773082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.653 [2024-07-24 19:23:28.773095] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1524e40, cid 0, qid 0 00:22:42.654 [2024-07-24 19:23:28.773182] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.654 [2024-07-24 19:23:28.773188] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.654 [2024-07-24 19:23:28.773193] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.654 [2024-07-24 19:23:28.773198] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1524e40) on tqpair=0x14b9f00 00:22:42.654 [2024-07-24 19:23:28.773203] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:42.654 [2024-07-24 19:23:28.773213] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:42.654 [2024-07-24 19:23:28.773220] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.654 [2024-07-24 19:23:28.773225] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.654 [2024-07-24 19:23:28.773230] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14b9f00) 00:22:42.654 [2024-07-24 19:23:28.773237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.654 [2024-07-24 19:23:28.773248] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1524e40, cid 0, qid 0 00:22:42.654 [2024-07-24 19:23:28.773409] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.654 [2024-07-24 19:23:28.773416] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.654 [2024-07-24 19:23:28.773420] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.654 [2024-07-24 19:23:28.773425] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1524e40) on tqpair=0x14b9f00 00:22:42.654 [2024-07-24 19:23:28.773433] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:42.654 [2024-07-24 19:23:28.773444] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.654 [2024-07-24 19:23:28.773450] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.654 [2024-07-24 19:23:28.773455] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14b9f00) 00:22:42.654 [2024-07-24 19:23:28.773462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.654 [2024-07-24 19:23:28.773474] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1524e40, cid 0, qid 0 00:22:42.654 [2024-07-24 19:23:28.773557] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.654 [2024-07-24 19:23:28.773564] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.654 [2024-07-24 19:23:28.773568] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.654 [2024-07-24 19:23:28.773573] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1524e40) on tqpair=0x14b9f00 00:22:42.654 [2024-07-24 19:23:28.773578] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:42.654 [2024-07-24 19:23:28.773585] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:42.654 [2024-07-24 19:23:28.773594] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:42.654 [2024-07-24 19:23:28.773700] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:42.654 [2024-07-24 19:23:28.773707] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:42.654 [2024-07-24 19:23:28.773721] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.654 [2024-07-24 19:23:28.773726] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.654 [2024-07-24 19:23:28.773731] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14b9f00) 00:22:42.654 [2024-07-24 19:23:28.773738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.654 [2024-07-24 19:23:28.773750] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1524e40, cid 0, qid 0 00:22:42.654 [2024-07-24 19:23:28.773835] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:22:42.654 [2024-07-24 19:23:28.773841] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.654 [2024-07-24 19:23:28.773846] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.654 [2024-07-24 19:23:28.773851] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1524e40) on tqpair=0x14b9f00 00:22:42.654 [2024-07-24 19:23:28.773856] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:42.654 [2024-07-24 19:23:28.773867] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.654 [2024-07-24 19:23:28.773872] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.654 [2024-07-24 19:23:28.773876] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14b9f00) 00:22:42.654 [2024-07-24 19:23:28.773883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.654 [2024-07-24 19:23:28.773894] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1524e40, cid 0, qid 0 00:22:42.654 [2024-07-24 19:23:28.773976] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.654 [2024-07-24 19:23:28.773983] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.654 [2024-07-24 19:23:28.773987] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.654 [2024-07-24 19:23:28.773994] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1524e40) on tqpair=0x14b9f00 00:22:42.654 [2024-07-24 19:23:28.773999] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:42.654 [2024-07-24 19:23:28.774005] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:42.654 [2024-07-24 19:23:28.774015] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:42.654 [2024-07-24 19:23:28.774024] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:42.654 [2024-07-24 19:23:28.774035] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.654 [2024-07-24 19:23:28.774040] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14b9f00) 00:22:42.654 [2024-07-24 19:23:28.774047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.654 [2024-07-24 19:23:28.774058] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1524e40, cid 0, qid 0 00:22:42.654 [2024-07-24 19:23:28.774178] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:42.654 [2024-07-24 19:23:28.774185] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:42.654 [2024-07-24 19:23:28.774190] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:42.654 [2024-07-24 19:23:28.774195] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14b9f00): datao=0, datal=4096, cccid=0 00:22:42.654 [2024-07-24 19:23:28.774201] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1524e40) on tqpair(0x14b9f00): expected_datao=0, payload_size=4096 00:22:42.654 [2024-07-24 19:23:28.774207] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.654 [2024-07-24 19:23:28.774286] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:42.654 [2024-07-24 19:23:28.774292] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:42.654 [2024-07-24 19:23:28.815873] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.654 [2024-07-24 19:23:28.815886] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.654 [2024-07-24 19:23:28.815891] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.654 [2024-07-24 19:23:28.815896] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1524e40) on tqpair=0x14b9f00 00:22:42.654 [2024-07-24 19:23:28.815906] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:42.654 [2024-07-24 19:23:28.815913] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:42.654 [2024-07-24 19:23:28.815918] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:42.654 [2024-07-24 19:23:28.815925] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:42.654 [2024-07-24 19:23:28.815931] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:42.654 [2024-07-24 19:23:28.815937] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:42.654 [2024-07-24 19:23:28.815948] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:42.654 [2024-07-24 19:23:28.815959] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.654 [2024-07-24 19:23:28.815965] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.654 [2024-07-24 19:23:28.815969] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14b9f00) 00:22:42.654 [2024-07-24 19:23:28.815980] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:42.654 [2024-07-24 19:23:28.815994] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1524e40, cid 0, qid 0 00:22:42.654 [2024-07-24 19:23:28.816080] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.654 [2024-07-24 19:23:28.816087] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.654 [2024-07-24 19:23:28.816092] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.654 [2024-07-24 19:23:28.816096] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1524e40) on tqpair=0x14b9f00 00:22:42.654 [2024-07-24 19:23:28.816104] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.654 [2024-07-24 19:23:28.816109] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.654 [2024-07-24 19:23:28.816113] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x14b9f00) 00:22:42.654 [2024-07-24 19:23:28.816120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.654 [2024-07-24 19:23:28.816127] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.654 [2024-07-24 19:23:28.816131] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.655 [2024-07-24 19:23:28.816136] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x14b9f00) 00:22:42.655 [2024-07-24 19:23:28.816142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.655 [2024-07-24 19:23:28.816149] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.655 [2024-07-24 19:23:28.816154] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.655 [2024-07-24 19:23:28.816158] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x14b9f00) 00:22:42.655 [2024-07-24 19:23:28.816165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.655 [2024-07-24 19:23:28.816171] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.655 [2024-07-24 19:23:28.816176] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.655 [2024-07-24 19:23:28.816181] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14b9f00) 00:22:42.655 [2024-07-24 19:23:28.816187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.655 [2024-07-24 19:23:28.816193] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:42.655 [2024-07-24 19:23:28.816205] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:42.655 [2024-07-24 19:23:28.816213] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.655 [2024-07-24 19:23:28.816217] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14b9f00) 00:22:42.655 [2024-07-24 19:23:28.816224] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.655 [2024-07-24 19:23:28.816237] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1524e40, cid 0, qid 0 00:22:42.655 [2024-07-24 19:23:28.816243] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1524fc0, cid 1, qid 0 00:22:42.655 [2024-07-24 19:23:28.816249] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1525140, cid 2, qid 0 00:22:42.655 [2024-07-24 19:23:28.816254] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15252c0, cid 3, qid 0 00:22:42.655 [2024-07-24 19:23:28.816260] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1525440, cid 4, qid 0 00:22:42.655 [2024-07-24 19:23:28.816374] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.655 [2024-07-24 19:23:28.816381] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.655 [2024-07-24 19:23:28.816387] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.655 [2024-07-24 19:23:28.816392] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1525440) on tqpair=0x14b9f00 00:22:42.655 [2024-07-24 19:23:28.816398] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:42.655 [2024-07-24 19:23:28.816404] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:42.655 [2024-07-24 19:23:28.816417] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.655 [2024-07-24 19:23:28.816422] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14b9f00) 00:22:42.655 [2024-07-24 19:23:28.816429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.655 [2024-07-24 19:23:28.816441] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1525440, cid 4, qid 0 00:22:42.655 [2024-07-24 19:23:28.816564] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:42.655 [2024-07-24 19:23:28.816572] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:42.655 [2024-07-24 19:23:28.816576] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:42.655 [2024-07-24 19:23:28.816581] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14b9f00): datao=0, datal=4096, cccid=4 00:22:42.655 [2024-07-24 19:23:28.816587] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1525440) on tqpair(0x14b9f00): expected_datao=0, payload_size=4096 00:22:42.655 [2024-07-24 19:23:28.816593] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.655 [2024-07-24 19:23:28.816600] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:42.655 [2024-07-24 19:23:28.816605] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:42.655 [2024-07-24 19:23:28.816696] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.655 [2024-07-24 19:23:28.816703] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.655 [2024-07-24 19:23:28.816707] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.655 [2024-07-24 19:23:28.816712] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1525440) on tqpair=0x14b9f00 00:22:42.655 [2024-07-24 19:23:28.820733] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:42.655 [2024-07-24 19:23:28.820758] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.655 [2024-07-24 19:23:28.820764] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14b9f00) 00:22:42.655 [2024-07-24 19:23:28.820771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.655 [2024-07-24 19:23:28.820779] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.655 [2024-07-24 19:23:28.820784] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.655 [2024-07-24 19:23:28.820788] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x14b9f00) 00:22:42.655 [2024-07-24 
19:23:28.820795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.655 [2024-07-24 19:23:28.820811] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1525440, cid 4, qid 0 00:22:42.655 [2024-07-24 19:23:28.820818] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15255c0, cid 5, qid 0 00:22:42.655 [2024-07-24 19:23:28.821020] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:42.655 [2024-07-24 19:23:28.821027] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:42.655 [2024-07-24 19:23:28.821032] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:42.655 [2024-07-24 19:23:28.821037] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14b9f00): datao=0, datal=1024, cccid=4 00:22:42.655 [2024-07-24 19:23:28.821044] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1525440) on tqpair(0x14b9f00): expected_datao=0, payload_size=1024 00:22:42.655 [2024-07-24 19:23:28.821050] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.655 [2024-07-24 19:23:28.821057] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:42.655 [2024-07-24 19:23:28.821062] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:42.655 [2024-07-24 19:23:28.821068] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.655 [2024-07-24 19:23:28.821074] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.655 [2024-07-24 19:23:28.821079] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.655 [2024-07-24 19:23:28.821084] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15255c0) on tqpair=0x14b9f00 00:22:42.655 [2024-07-24 19:23:28.862888] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.655 [2024-07-24 19:23:28.862900] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.655 [2024-07-24 19:23:28.862905] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.655 [2024-07-24 19:23:28.862910] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1525440) on tqpair=0x14b9f00 00:22:42.655 [2024-07-24 19:23:28.862928] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.655 [2024-07-24 19:23:28.862933] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14b9f00) 00:22:42.655 [2024-07-24 19:23:28.862941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.655 [2024-07-24 19:23:28.862960] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1525440, cid 4, qid 0 00:22:42.655 [2024-07-24 19:23:28.863055] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:42.655 [2024-07-24 19:23:28.863062] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:42.655 [2024-07-24 19:23:28.863067] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:42.655 [2024-07-24 19:23:28.863071] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14b9f00): datao=0, datal=3072, cccid=4 00:22:42.655 [2024-07-24 19:23:28.863077] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1525440) on tqpair(0x14b9f00): expected_datao=0, payload_size=3072 00:22:42.655 
[2024-07-24 19:23:28.863083] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.655 [2024-07-24 19:23:28.863179] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:42.655 [2024-07-24 19:23:28.863184] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:42.918 [2024-07-24 19:23:28.907727] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.918 [2024-07-24 19:23:28.907738] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.918 [2024-07-24 19:23:28.907743] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.918 [2024-07-24 19:23:28.907748] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1525440) on tqpair=0x14b9f00 00:22:42.918 [2024-07-24 19:23:28.907759] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.918 [2024-07-24 19:23:28.907764] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14b9f00) 00:22:42.918 [2024-07-24 19:23:28.907771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.918 [2024-07-24 19:23:28.907789] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1525440, cid 4, qid 0 00:22:42.918 [2024-07-24 19:23:28.907956] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:42.918 [2024-07-24 19:23:28.907963] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:42.918 [2024-07-24 19:23:28.907968] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:42.918 [2024-07-24 19:23:28.907973] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14b9f00): datao=0, datal=8, cccid=4 00:22:42.918 [2024-07-24 19:23:28.907979] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1525440) on tqpair(0x14b9f00): expected_datao=0, payload_size=8 00:22:42.918 [2024-07-24 19:23:28.907988] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.918 [2024-07-24 19:23:28.907996] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:42.918 [2024-07-24 19:23:28.908001] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:42.918 [2024-07-24 19:23:28.949724] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.918 [2024-07-24 19:23:28.949736] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.918 [2024-07-24 19:23:28.949740] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.918 [2024-07-24 19:23:28.949746] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1525440) on tqpair=0x14b9f00
00:22:42.918 =====================================================
00:22:42.918 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:22:42.918 =====================================================
00:22:42.918 Controller Capabilities/Features
00:22:42.918 ================================
00:22:42.918 Vendor ID: 0000
00:22:42.918 Subsystem Vendor ID: 0000
00:22:42.918 Serial Number: ....................
00:22:42.918 Model Number: ........................................
00:22:42.918 Firmware Version: 24.09
00:22:42.918 Recommended Arb Burst: 0
00:22:42.918 IEEE OUI Identifier: 00 00 00
00:22:42.918 Multi-path I/O
00:22:42.918 May have multiple subsystem ports: No
00:22:42.918 May have multiple controllers: No
00:22:42.918 Associated with SR-IOV VF: No
00:22:42.918 Max Data Transfer Size: 131072
00:22:42.918 Max Number of Namespaces: 0
00:22:42.918 Max Number of I/O Queues: 1024
00:22:42.918 NVMe Specification Version (VS): 1.3
00:22:42.918 NVMe Specification Version (Identify): 1.3
00:22:42.918 Maximum Queue Entries: 128
00:22:42.918 Contiguous Queues Required: Yes
00:22:42.918 Arbitration Mechanisms Supported
00:22:42.918 Weighted Round Robin: Not Supported
00:22:42.918 Vendor Specific: Not Supported
00:22:42.918 Reset Timeout: 15000 ms
00:22:42.918 Doorbell Stride: 4 bytes
00:22:42.918 NVM Subsystem Reset: Not Supported
00:22:42.918 Command Sets Supported
00:22:42.918 NVM Command Set: Supported
00:22:42.918 Boot Partition: Not Supported
00:22:42.918 Memory Page Size Minimum: 4096 bytes
00:22:42.918 Memory Page Size Maximum: 4096 bytes
00:22:42.918 Persistent Memory Region: Not Supported
00:22:42.918 Optional Asynchronous Events Supported
00:22:42.918 Namespace Attribute Notices: Not Supported
00:22:42.918 Firmware Activation Notices: Not Supported
00:22:42.918 ANA Change Notices: Not Supported
00:22:42.918 PLE Aggregate Log Change Notices: Not Supported
00:22:42.918 LBA Status Info Alert Notices: Not Supported
00:22:42.918 EGE Aggregate Log Change Notices: Not Supported
00:22:42.918 Normal NVM Subsystem Shutdown event: Not Supported
00:22:42.918 Zone Descriptor Change Notices: Not Supported
00:22:42.918 Discovery Log Change Notices: Supported
00:22:42.918 Controller Attributes
00:22:42.918 128-bit Host Identifier: Not Supported
00:22:42.918 Non-Operational Permissive Mode: Not Supported
00:22:42.918 NVM Sets: Not Supported
00:22:42.918 Read Recovery Levels: Not Supported
00:22:42.918 Endurance Groups: Not Supported
00:22:42.918 Predictable Latency Mode: Not Supported
00:22:42.918 Traffic Based Keep ALive: Not Supported
00:22:42.918 Namespace Granularity: Not Supported
00:22:42.918 SQ Associations: Not Supported
00:22:42.918 UUID List: Not Supported
00:22:42.918 Multi-Domain Subsystem: Not Supported
00:22:42.918 Fixed Capacity Management: Not Supported
00:22:42.918 Variable Capacity Management: Not Supported
00:22:42.918 Delete Endurance Group: Not Supported
00:22:42.918 Delete NVM Set: Not Supported
00:22:42.918 Extended LBA Formats Supported: Not Supported
00:22:42.918 Flexible Data Placement Supported: Not Supported
00:22:42.918
00:22:42.918 Controller Memory Buffer Support
00:22:42.918 ================================
00:22:42.918 Supported: No
00:22:42.918
00:22:42.918 Persistent Memory Region Support
00:22:42.918 ================================
00:22:42.918 Supported: No
00:22:42.918
00:22:42.918 Admin Command Set Attributes
00:22:42.918 ============================
00:22:42.918 Security Send/Receive: Not Supported
00:22:42.918 Format NVM: Not Supported
00:22:42.918 Firmware Activate/Download: Not Supported
00:22:42.918 Namespace Management: Not Supported
00:22:42.918 Device Self-Test: Not Supported
00:22:42.918 Directives: Not Supported
00:22:42.918 NVMe-MI: Not Supported
00:22:42.918 Virtualization Management: Not Supported
00:22:42.918 Doorbell Buffer Config: Not Supported
00:22:42.918 Get LBA Status Capability: Not Supported
00:22:42.918 Command & Feature Lockdown Capability: Not Supported
00:22:42.918 Abort Command Limit: 1
00:22:42.918 Async Event Request Limit: 4
00:22:42.918 Number of Firmware Slots: N/A
00:22:42.918 Firmware Slot 1 Read-Only: N/A
00:22:42.918 Firmware Activation Without Reset: N/A
00:22:42.918 Multiple Update Detection Support: N/A
00:22:42.918 Firmware Update Granularity: No Information Provided
00:22:42.918 Per-Namespace SMART Log: No
00:22:42.918 Asymmetric Namespace Access Log Page: Not Supported
00:22:42.918 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:22:42.918 Command Effects Log Page: Not Supported
00:22:42.918 Get Log Page Extended Data: Supported
00:22:42.918 Telemetry Log Pages: Not Supported
00:22:42.919 Persistent Event Log Pages: Not Supported
00:22:42.919 Supported Log Pages Log Page: May Support
00:22:42.919 Commands Supported & Effects Log Page: Not Supported
00:22:42.919 Feature Identifiers & Effects Log Page:May Support
00:22:42.919 NVMe-MI Commands & Effects Log Page: May Support
00:22:42.919 Data Area 4 for Telemetry Log: Not Supported
00:22:42.919 Error Log Page Entries Supported: 128
00:22:42.919 Keep Alive: Not Supported
00:22:42.919
00:22:42.919 NVM Command Set Attributes
00:22:42.919 ==========================
00:22:42.919 Submission Queue Entry Size
00:22:42.919 Max: 1
00:22:42.919 Min: 1
00:22:42.919 Completion Queue Entry Size
00:22:42.919 Max: 1
00:22:42.919 Min: 1
00:22:42.919 Number of Namespaces: 0
00:22:42.919 Compare Command: Not Supported
00:22:42.919 Write Uncorrectable Command: Not Supported
00:22:42.919 Dataset Management Command: Not Supported
00:22:42.919 Write Zeroes Command: Not Supported
00:22:42.919 Set Features Save Field: Not Supported
00:22:42.919 Reservations: Not Supported
00:22:42.919 Timestamp: Not Supported
00:22:42.919 Copy: Not Supported
00:22:42.919 Volatile Write Cache: Not Present
00:22:42.919 Atomic Write Unit (Normal): 1
00:22:42.919 Atomic Write Unit (PFail): 1
00:22:42.919 Atomic Compare & Write Unit: 1
00:22:42.919 Fused Compare & Write: Supported
00:22:42.919 Scatter-Gather List
00:22:42.919 SGL Command Set: Supported
00:22:42.919 SGL Keyed: Supported
00:22:42.919 SGL Bit Bucket Descriptor: Not Supported
00:22:42.919 SGL Metadata Pointer: Not Supported
00:22:42.919 Oversized SGL: Not Supported
00:22:42.919 SGL Metadata Address: Not Supported
00:22:42.919 SGL Offset: Supported
00:22:42.919 Transport SGL Data Block: Not Supported
00:22:42.919 Replay Protected Memory Block: Not Supported
00:22:42.919
00:22:42.919 Firmware Slot Information
00:22:42.919 =========================
00:22:42.919 Active slot: 0
00:22:42.919
00:22:42.919
00:22:42.919 Error Log
00:22:42.919 =========
00:22:42.919
00:22:42.919 Active Namespaces
00:22:42.919 =================
00:22:42.919 Discovery Log Page
00:22:42.919 ==================
00:22:42.919 Generation Counter: 2
00:22:42.919 Number of Records: 2
00:22:42.919 Record Format: 0
00:22:42.919
00:22:42.919 Discovery Log Entry 0
00:22:42.919 ----------------------
00:22:42.919 Transport Type: 3 (TCP)
00:22:42.919 Address Family: 1 (IPv4)
00:22:42.919 Subsystem Type: 3 (Current Discovery Subsystem)
00:22:42.919 Entry Flags:
00:22:42.919 Duplicate Returned Information: 1
00:22:42.919 Explicit Persistent Connection Support for Discovery: 1
00:22:42.919 Transport Requirements:
00:22:42.919 Secure Channel: Not Required
00:22:42.919 Port ID: 0 (0x0000)
00:22:42.919 Controller ID: 65535 (0xffff)
00:22:42.919 Admin Max SQ Size: 128
00:22:42.919 Transport Service Identifier: 4420
00:22:42.919 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:22:42.919 Transport Address: 10.0.0.2
00:22:42.919 Discovery Log Entry 1
00:22:42.919 ----------------------
00:22:42.919 Transport Type: 3 (TCP)
00:22:42.919 Address Family: 1 (IPv4)
00:22:42.919 Subsystem Type: 2 (NVM Subsystem)
00:22:42.919 Entry Flags:
00:22:42.919 Duplicate Returned Information: 0
00:22:42.919 Explicit Persistent Connection Support for Discovery: 0
00:22:42.919 Transport Requirements:
00:22:42.919 Secure Channel: Not Required
00:22:42.919 Port ID: 0 (0x0000)
00:22:42.919 Controller ID: 65535 (0xffff)
00:22:42.919 Admin Max SQ Size: 128
00:22:42.919 Transport Service Identifier: 4420
00:22:42.919 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:22:42.919 Transport Address: 10.0.0.2
[2024-07-24 19:23:28.949827] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:22:42.919 [2024-07-24 19:23:28.949839] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1524e40) on tqpair=0x14b9f00 00:22:42.919 [2024-07-24 19:23:28.949845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.919 [2024-07-24 19:23:28.949852] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1524fc0) on tqpair=0x14b9f00 00:22:42.919 [2024-07-24 19:23:28.949858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.919 [2024-07-24 19:23:28.949864] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1525140) on tqpair=0x14b9f00 00:22:42.919 [2024-07-24 19:23:28.949869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.919 [2024-07-24 19:23:28.949875] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15252c0) on tqpair=0x14b9f00 00:22:42.919 [2024-07-24 19:23:28.949881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.919 [2024-07-24 19:23:28.949892] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.919 [2024-07-24 19:23:28.949897] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.919 [2024-07-24 19:23:28.949902] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14b9f00) 00:22:42.919 [2024-07-24 19:23:28.949910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.919 [2024-07-24 19:23:28.949925] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15252c0, cid 3, qid 0 00:22:42.919 [2024-07-24 19:23:28.950035] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.919 [2024-07-24 19:23:28.950043] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.919 [2024-07-24 19:23:28.950048] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.919 [2024-07-24 19:23:28.950053] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15252c0) on tqpair=0x14b9f00 00:22:42.919 [2024-07-24 19:23:28.950061] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.919 [2024-07-24 19:23:28.950066] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.919 [2024-07-24 19:23:28.950070] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14b9f00) 00:22:42.919 [2024-07-24
19:23:28.950077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.919 [2024-07-24 19:23:28.950093] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15252c0, cid 3, qid 0 00:22:42.919 [2024-07-24 19:23:28.950203] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.919 [2024-07-24 19:23:28.950211] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.919 [2024-07-24 19:23:28.950215] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.919 [2024-07-24 19:23:28.950220] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15252c0) on tqpair=0x14b9f00 00:22:42.919 [2024-07-24 19:23:28.950226] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:42.919 [2024-07-24 19:23:28.950234] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:42.919 [2024-07-24 19:23:28.950245] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.919 [2024-07-24 19:23:28.950250] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.919 [2024-07-24 19:23:28.950255] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14b9f00) 00:22:42.919 [2024-07-24 19:23:28.950262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.919 [2024-07-24 19:23:28.950273] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15252c0, cid 3, qid 0 00:22:42.919 [2024-07-24 19:23:28.950371] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.919 [2024-07-24 19:23:28.950378] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.919 [2024-07-24 19:23:28.950383] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.919 [2024-07-24 19:23:28.950388] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15252c0) on tqpair=0x14b9f00 00:22:42.919 [2024-07-24 19:23:28.950399] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.919 [2024-07-24 19:23:28.950404] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.919 [2024-07-24 19:23:28.950408] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14b9f00) 00:22:42.919 [2024-07-24 19:23:28.950415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.919 [2024-07-24 19:23:28.950426] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15252c0, cid 3, qid 0 00:22:42.919 [2024-07-24 19:23:28.950507] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.920 [2024-07-24 19:23:28.950514] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.920 [2024-07-24 19:23:28.950518] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.920 [2024-07-24 19:23:28.950523] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15252c0) on tqpair=0x14b9f00 00:22:42.920 [2024-07-24 19:23:28.950533] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.920 [2024-07-24 19:23:28.950538] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.920 [2024-07-24 19:23:28.950543] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14b9f00) 00:22:42.920 [2024-07-24 19:23:28.950550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.920 [2024-07-24 19:23:28.950561] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15252c0, cid 3, qid 0 00:22:42.920 [2024-07-24 19:23:28.950645] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.920 [2024-07-24 19:23:28.950652] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.920 [2024-07-24 19:23:28.950657] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.920 [2024-07-24 19:23:28.950661] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15252c0) on tqpair=0x14b9f00 00:22:42.920 [2024-07-24 19:23:28.950671] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.920 [2024-07-24 19:23:28.950676] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.920 [2024-07-24 19:23:28.950680] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14b9f00) 00:22:42.920 [2024-07-24 19:23:28.950687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.920 [2024-07-24 19:23:28.950699] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15252c0, cid 3, qid 0 00:22:42.920 [2024-07-24 19:23:28.950782] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.920 [2024-07-24 19:23:28.950790] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.920 [2024-07-24 19:23:28.950795] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.920 [2024-07-24 19:23:28.950801] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15252c0) on tqpair=0x14b9f00 00:22:42.920 [2024-07-24 19:23:28.950812] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.920 [2024-07-24 19:23:28.950817] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.920 [2024-07-24 19:23:28.950821] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14b9f00) 00:22:42.920 [2024-07-24 19:23:28.950828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.920 [2024-07-24 19:23:28.950840] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15252c0, cid 3, qid 0 00:22:42.920 [2024-07-24 19:23:28.950922] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.920 [2024-07-24 19:23:28.950929] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.920 [2024-07-24 19:23:28.950934] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.920 [2024-07-24 19:23:28.950938] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15252c0) on tqpair=0x14b9f00 00:22:42.920 [2024-07-24 19:23:28.950948] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.920 [2024-07-24 19:23:28.950953] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.920 [2024-07-24 19:23:28.950958] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14b9f00) 00:22:42.920 [2024-07-24 19:23:28.950965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.920 [2024-07-24 19:23:28.950976] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15252c0, cid 3, qid 0 00:22:42.920 [2024-07-24 19:23:28.951060] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.920 [2024-07-24 19:23:28.951067] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.920 [2024-07-24 19:23:28.951071] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.920 [2024-07-24 19:23:28.951076] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15252c0) on tqpair=0x14b9f00 00:22:42.920 [2024-07-24 19:23:28.951086] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.920 [2024-07-24 19:23:28.951091] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.920 [2024-07-24 19:23:28.951095] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14b9f00) 00:22:42.920 [2024-07-24 19:23:28.951102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.920 [2024-07-24 19:23:28.951113] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15252c0, cid 3, qid 0 00:22:42.920 [2024-07-24 19:23:28.951194] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.920 [2024-07-24 19:23:28.951201] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.920 [2024-07-24 19:23:28.951206] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.920 [2024-07-24 19:23:28.951211] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15252c0) on tqpair=0x14b9f00 00:22:42.920 [2024-07-24 19:23:28.951220] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.920 [2024-07-24 19:23:28.951226] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.920 [2024-07-24 19:23:28.951230] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14b9f00) 00:22:42.920 [2024-07-24 19:23:28.951237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.920 [2024-07-24 19:23:28.951248] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15252c0, cid 3, qid 0 00:22:42.920 [2024-07-24 19:23:28.951329] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.920 [2024-07-24 19:23:28.951336] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.920 [2024-07-24 19:23:28.951340] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.920 [2024-07-24 19:23:28.951345] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15252c0) on tqpair=0x14b9f00 00:22:42.920 [2024-07-24 19:23:28.951358] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.920 [2024-07-24 19:23:28.951363] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.920 [2024-07-24 19:23:28.951367] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14b9f00) 00:22:42.920 [2024-07-24 19:23:28.951374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.920 [2024-07-24 19:23:28.951386] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15252c0, cid 3, qid 0 00:22:42.920 
[2024-07-24 19:23:28.951542] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.920 [2024-07-24 19:23:28.951549] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.920 [2024-07-24 19:23:28.951553] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.920 [2024-07-24 19:23:28.951558] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15252c0) on tqpair=0x14b9f00 00:22:42.920 [2024-07-24 19:23:28.951568] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.920 [2024-07-24 19:23:28.951573] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.920 [2024-07-24 19:23:28.951578] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14b9f00) 00:22:42.920 [2024-07-24 19:23:28.951585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.920 [2024-07-24 19:23:28.951596] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15252c0, cid 3, qid 0 00:22:42.920 [2024-07-24 19:23:28.951680] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.920 [2024-07-24 19:23:28.951687] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.920 [2024-07-24 19:23:28.951691] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.920 [2024-07-24 19:23:28.951696] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15252c0) on tqpair=0x14b9f00 00:22:42.920 [2024-07-24 19:23:28.951706] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.920 [2024-07-24 19:23:28.951711] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.920 [2024-07-24 19:23:28.951719] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14b9f00) 00:22:42.920 [2024-07-24 19:23:28.951726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.920 [2024-07-24 19:23:28.951738] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15252c0, cid 3, qid 0 00:22:42.920 [2024-07-24 19:23:28.951816] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.920 [2024-07-24 19:23:28.951824] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.920 [2024-07-24 19:23:28.951828] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.920 [2024-07-24 19:23:28.951833] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15252c0) on tqpair=0x14b9f00 00:22:42.920 [2024-07-24 19:23:28.951842] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.920 [2024-07-24 19:23:28.951847] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.920 [2024-07-24 19:23:28.951852] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14b9f00) 00:22:42.920 [2024-07-24 19:23:28.951859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.920 [2024-07-24 19:23:28.951870] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15252c0, cid 3, qid 0 00:22:42.920 [2024-07-24 19:23:28.951949] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.920 [2024-07-24 19:23:28.951956] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:22:42.920 [2024-07-24 19:23:28.951962] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.920 [2024-07-24 19:23:28.951966] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15252c0) on tqpair=0x14b9f00 00:22:42.920 [2024-07-24 19:23:28.951976] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.920 [2024-07-24 19:23:28.951983] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.921 [2024-07-24 19:23:28.951987] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14b9f00) 00:22:42.921 [2024-07-24 19:23:28.951994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.921 [2024-07-24 19:23:28.952005] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15252c0, cid 3, qid 0 00:22:42.921 [2024-07-24 19:23:28.952089] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.921 [2024-07-24 19:23:28.952096] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.921 [2024-07-24 19:23:28.952101] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.921 [2024-07-24 19:23:28.952106] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15252c0) on tqpair=0x14b9f00 00:22:42.921 [2024-07-24 19:23:28.952115] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.921 [2024-07-24 19:23:28.952120] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.921 [2024-07-24 19:23:28.952125] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14b9f00) 00:22:42.921 [2024-07-24 19:23:28.952131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.921 [2024-07-24 19:23:28.952143] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15252c0, cid 3, qid 0 00:22:42.921 [2024-07-24 19:23:28.952224] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.921 [2024-07-24 19:23:28.952231] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.921 [2024-07-24 19:23:28.952236] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.921 [2024-07-24 19:23:28.952240] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15252c0) on tqpair=0x14b9f00 00:22:42.921 [2024-07-24 19:23:28.952250] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.921 [2024-07-24 19:23:28.952255] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.921 [2024-07-24 19:23:28.952259] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14b9f00) 00:22:42.921 [2024-07-24 19:23:28.952266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.921 [2024-07-24 19:23:28.952278] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15252c0, cid 3, qid 0 00:22:42.921 [2024-07-24 19:23:28.952356] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.921 [2024-07-24 19:23:28.952363] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.921 [2024-07-24 19:23:28.952368] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.921 [2024-07-24 19:23:28.952372] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x15252c0) on tqpair=0x14b9f00 00:22:42.921 [2024-07-24 19:23:28.952382] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.921 [2024-07-24 19:23:28.952387] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.921 [2024-07-24 19:23:28.952391] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14b9f00) 00:22:42.921 [2024-07-24 19:23:28.952398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.921 [2024-07-24 19:23:28.952409] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15252c0, cid 3, qid 0 00:22:42.921 [2024-07-24 19:23:28.952490] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.921 [2024-07-24 19:23:28.952497] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.921 [2024-07-24 19:23:28.952501] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.921 [2024-07-24 19:23:28.952506] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15252c0) on tqpair=0x14b9f00 00:22:42.921 [2024-07-24 19:23:28.952516] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.921 [2024-07-24 19:23:28.952521] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.921 [2024-07-24 19:23:28.952529] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14b9f00) 00:22:42.921 [2024-07-24 19:23:28.952536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.921 [2024-07-24 19:23:28.952547] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15252c0, cid 3, qid 0 00:22:42.921 [2024-07-24 19:23:28.952628] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.921 [2024-07-24 19:23:28.952635] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.921 [2024-07-24 19:23:28.952639] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.921 [2024-07-24 19:23:28.952644] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15252c0) on tqpair=0x14b9f00 00:22:42.921 [2024-07-24 19:23:28.952653] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.921 [2024-07-24 19:23:28.952658] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.921 [2024-07-24 19:23:28.952663] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14b9f00) 00:22:42.921 [2024-07-24 19:23:28.952670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.921 [2024-07-24 19:23:28.952681] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15252c0, cid 3, qid 0 00:22:42.921 [2024-07-24 19:23:28.952769] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.921 [2024-07-24 19:23:28.952776] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.921 [2024-07-24 19:23:28.952780] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.921 [2024-07-24 19:23:28.952785] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15252c0) on tqpair=0x14b9f00 00:22:42.921 [2024-07-24 19:23:28.952795] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.921 [2024-07-24 19:23:28.952800] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.921 [2024-07-24 19:23:28.952804] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14b9f00) 00:22:42.921 [2024-07-24 19:23:28.952811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.921 [2024-07-24 19:23:28.952823] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15252c0, cid 3, qid 0 00:22:42.921 [2024-07-24 19:23:28.952907] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.921 [2024-07-24 19:23:28.952914] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.921 [2024-07-24 19:23:28.952918] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.921 [2024-07-24 19:23:28.952923] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15252c0) on tqpair=0x14b9f00 00:22:42.921 [2024-07-24 19:23:28.952933] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.921 [2024-07-24 19:23:28.952938] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.921 [2024-07-24 19:23:28.952942] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14b9f00) 00:22:42.921 [2024-07-24 19:23:28.952949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.921 [2024-07-24 19:23:28.952960] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15252c0, cid 3, qid 0 00:22:42.921 [2024-07-24 19:23:28.953109] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.921 [2024-07-24 19:23:28.953115] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.921 [2024-07-24 19:23:28.953120] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.921 [2024-07-24 19:23:28.953125] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15252c0) on tqpair=0x14b9f00 00:22:42.921 [2024-07-24 19:23:28.953135] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.921 [2024-07-24 19:23:28.953140] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.921 [2024-07-24 19:23:28.953144] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14b9f00) 00:22:42.921 [2024-07-24 19:23:28.953153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.921 [2024-07-24 19:23:28.953164] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15252c0, cid 3, qid 0 00:22:42.921 [2024-07-24 19:23:28.953247] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.921 [2024-07-24 19:23:28.953254] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.921 [2024-07-24 19:23:28.953258] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.921 [2024-07-24 19:23:28.953263] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15252c0) on tqpair=0x14b9f00 00:22:42.921 [2024-07-24 19:23:28.953273] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.921 [2024-07-24 19:23:28.953278] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.921 [2024-07-24 19:23:28.953282] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14b9f00) 00:22:42.921 
[2024-07-24 19:23:28.953289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.921 [2024-07-24 19:23:28.953300] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15252c0, cid 3, qid 0 00:22:42.921 [2024-07-24 19:23:28.953384] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.921 [2024-07-24 19:23:28.953391] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.921 [2024-07-24 19:23:28.953395] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.921 [2024-07-24 19:23:28.953400] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15252c0) on tqpair=0x14b9f00 00:22:42.921 [2024-07-24 19:23:28.953410] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.921 [2024-07-24 19:23:28.953415] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.921 [2024-07-24 19:23:28.953419] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14b9f00) 00:22:42.921 [2024-07-24 19:23:28.953426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.921 [2024-07-24 19:23:28.953437] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15252c0, cid 3, qid 0 00:22:42.921 [2024-07-24 19:23:28.953520] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.921 [2024-07-24 19:23:28.953527] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.921 [2024-07-24 19:23:28.953532] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.922 [2024-07-24 19:23:28.953536] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15252c0) on tqpair=0x14b9f00 00:22:42.922 [2024-07-24 19:23:28.953546] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.922 [2024-07-24 19:23:28.953551] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.922 [2024-07-24 19:23:28.953555] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14b9f00) 00:22:42.922 [2024-07-24 19:23:28.953562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.922 [2024-07-24 19:23:28.953573] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15252c0, cid 3, qid 0 00:22:42.922 [2024-07-24 19:23:28.953654] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.922 [2024-07-24 19:23:28.953661] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.922 [2024-07-24 19:23:28.953666] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.922 [2024-07-24 19:23:28.953671] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15252c0) on tqpair=0x14b9f00 00:22:42.922 [2024-07-24 19:23:28.953680] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.922 [2024-07-24 19:23:28.953685] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.922 [2024-07-24 19:23:28.953689] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14b9f00) 00:22:42.922 [2024-07-24 19:23:28.953696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.922 [2024-07-24 19:23:28.953709] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15252c0, cid 3, qid 0 00:22:42.922 [2024-07-24 19:23:28.957725] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.922 [2024-07-24 19:23:28.957733] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.922 [2024-07-24 19:23:28.957738] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.922 [2024-07-24 19:23:28.957742] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15252c0) on tqpair=0x14b9f00 00:22:42.922 [2024-07-24 19:23:28.957753] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.922 [2024-07-24 19:23:28.957759] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.922 [2024-07-24 19:23:28.957763] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14b9f00) 00:22:42.922 [2024-07-24 19:23:28.957770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.922 [2024-07-24 19:23:28.957783] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15252c0, cid 3, qid 0 00:22:42.922 [2024-07-24 19:23:28.957947] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.922 [2024-07-24 19:23:28.957954] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.922 [2024-07-24 19:23:28.957958] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.922 [2024-07-24 19:23:28.957963] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15252c0) on tqpair=0x14b9f00 00:22:42.922 [2024-07-24 19:23:28.957972] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:22:42.922 00:22:42.922 19:23:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:42.922 [2024-07-24 19:23:28.996837] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
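
With the discovery controller shut down, identify.sh now points the same tool at the I/O subsystem, nqn.2016-06.io.spdk:cnode1. The -r argument is a transport ID string that maps onto SPDK's public parse-and-connect pair; a minimal sketch of that mapping (error handling trimmed, env init assumed done, the connect_cnode1 wrapper is illustrative rather than the tool's actual source):

    #include "spdk/nvme.h"

    static struct spdk_nvme_ctrlr *
    connect_cnode1(void)
    {
            struct spdk_nvme_transport_id trid = {0};
            struct spdk_nvme_ctrlr_opts opts;

            /* Same string identify.sh passes via -r. */
            spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1");

            spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));

            /* Drives everything logged below: socket connect, icreq/icresp,
             * FABRIC CONNECT, the CC/CSTS enable handshake and IDENTIFY. */
            return spdk_nvme_connect(&trid, &opts, sizeof(opts));
    }
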
00:22:42.922 [2024-07-24 19:23:28.996875] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1606294 ] 00:22:42.922 EAL: No free 2048 kB hugepages reported on node 1 00:22:42.922 [2024-07-24 19:23:29.027775] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:42.922 [2024-07-24 19:23:29.027823] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:42.922 [2024-07-24 19:23:29.027829] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:42.922 [2024-07-24 19:23:29.027841] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:42.922 [2024-07-24 19:23:29.027850] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:42.922 [2024-07-24 19:23:29.028201] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:42.922 [2024-07-24 19:23:29.028223] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1907f00 0 00:22:42.922 [2024-07-24 19:23:29.042721] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:42.922 [2024-07-24 19:23:29.042737] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:42.922 [2024-07-24 19:23:29.042742] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:42.922 [2024-07-24 19:23:29.042747] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:42.922 [2024-07-24 19:23:29.042780] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.922 [2024-07-24 19:23:29.042789] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.922 [2024-07-24 19:23:29.042794] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1907f00) 00:22:42.922 [2024-07-24 19:23:29.042805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:42.922 [2024-07-24 19:23:29.042822] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1972e40, cid 0, qid 0 00:22:42.922 [2024-07-24 19:23:29.049725] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.922 [2024-07-24 19:23:29.049734] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.922 [2024-07-24 19:23:29.049739] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.922 [2024-07-24 19:23:29.049744] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1972e40) on tqpair=0x1907f00 00:22:42.922 [2024-07-24 19:23:29.049756] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:42.922 [2024-07-24 19:23:29.049763] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:42.922 [2024-07-24 19:23:29.049770] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:42.922 [2024-07-24 19:23:29.049783] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.922 [2024-07-24 19:23:29.049788] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
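
From here the trace is the generic controller bring-up state machine (read vs, read cap, check en, enable, wait for CSTS.RDY = 1, identify). Over fabrics the CAP/VS/CC/CSTS registers are reached with Property Get/Set capsules, which is why each state transition is bracketed by FABRIC PROPERTY GET/SET commands. Reduced to register operations, the enable step looks roughly like this (prop_get/prop_set are hypothetical stand-ins for the fabrics property commands; register offsets are from the NVMe specification):

    #include <stdint.h>

    /* Hypothetical accessors standing in for Fabrics Property Get/Set. */
    uint64_t prop_get(uint32_t ofs);
    void prop_set(uint32_t ofs, uint64_t val);

    #define NVME_REG_VS   0x08 /* version            ("read vs")  */
    #define NVME_REG_CC   0x14 /* controller config  ("check en") */
    #define NVME_REG_CSTS 0x1c /* controller status               */

    void
    enable_controller(void)
    {
            /* Matches the trace: CC.EN = 0 && CSTS.RDY = 0, set CC.EN = 1,
             * then poll until CSTS.RDY = 1; real code bounds the poll with
             * CAP.TO, hence the "timeout 15000 ms" states above. */
            prop_set(NVME_REG_CC, prop_get(NVME_REG_CC) | 0x1);
            while ((prop_get(NVME_REG_CSTS) & 0x1) == 0) {
                    ;
            }
    }
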
00:22:42.922 [2024-07-24 19:23:29.049793] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1907f00) 00:22:42.922 [2024-07-24 19:23:29.049801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.922 [2024-07-24 19:23:29.049815] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1972e40, cid 0, qid 0 00:22:42.922 [2024-07-24 19:23:29.049998] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.922 [2024-07-24 19:23:29.050005] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.922 [2024-07-24 19:23:29.050010] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.922 [2024-07-24 19:23:29.050015] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1972e40) on tqpair=0x1907f00 00:22:42.922 [2024-07-24 19:23:29.050022] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:42.922 [2024-07-24 19:23:29.050032] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:42.922 [2024-07-24 19:23:29.050040] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.922 [2024-07-24 19:23:29.050045] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.922 [2024-07-24 19:23:29.050049] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1907f00) 00:22:42.922 [2024-07-24 19:23:29.050056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.922 [2024-07-24 19:23:29.050069] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1972e40, cid 0, qid 0 00:22:42.922 [2024-07-24 19:23:29.050156] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.922 [2024-07-24 19:23:29.050163] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.922 [2024-07-24 19:23:29.050168] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.922 [2024-07-24 19:23:29.050173] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1972e40) on tqpair=0x1907f00 00:22:42.922 [2024-07-24 19:23:29.050178] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:42.922 [2024-07-24 19:23:29.050188] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:42.922 [2024-07-24 19:23:29.050195] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.922 [2024-07-24 19:23:29.050200] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.922 [2024-07-24 19:23:29.050207] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1907f00) 00:22:42.922 [2024-07-24 19:23:29.050214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.922 [2024-07-24 19:23:29.050226] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1972e40, cid 0, qid 0 00:22:42.922 [2024-07-24 19:23:29.050315] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.922 [2024-07-24 19:23:29.050322] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:22:42.922 [2024-07-24 19:23:29.050327] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.923 [2024-07-24 19:23:29.050331] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1972e40) on tqpair=0x1907f00 00:22:42.923 [2024-07-24 19:23:29.050337] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:42.923 [2024-07-24 19:23:29.050348] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.923 [2024-07-24 19:23:29.050353] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.923 [2024-07-24 19:23:29.050358] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1907f00) 00:22:42.923 [2024-07-24 19:23:29.050365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.923 [2024-07-24 19:23:29.050376] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1972e40, cid 0, qid 0 00:22:42.923 [2024-07-24 19:23:29.050459] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.923 [2024-07-24 19:23:29.050466] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.923 [2024-07-24 19:23:29.050470] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.923 [2024-07-24 19:23:29.050475] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1972e40) on tqpair=0x1907f00 00:22:42.923 [2024-07-24 19:23:29.050480] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:42.923 [2024-07-24 19:23:29.050486] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:42.923 [2024-07-24 19:23:29.050495] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:42.923 [2024-07-24 19:23:29.050602] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:42.923 [2024-07-24 19:23:29.050607] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:42.923 [2024-07-24 19:23:29.050615] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.923 [2024-07-24 19:23:29.050620] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.923 [2024-07-24 19:23:29.050625] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1907f00) 00:22:42.923 [2024-07-24 19:23:29.050632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.923 [2024-07-24 19:23:29.050644] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1972e40, cid 0, qid 0 00:22:42.923 [2024-07-24 19:23:29.050734] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.923 [2024-07-24 19:23:29.050741] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.923 [2024-07-24 19:23:29.050745] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.923 [2024-07-24 19:23:29.050750] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1972e40) on 
tqpair=0x1907f00 00:22:42.923 [2024-07-24 19:23:29.050755] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:42.923 [2024-07-24 19:23:29.050766] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.923 [2024-07-24 19:23:29.050771] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.923 [2024-07-24 19:23:29.050778] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1907f00) 00:22:42.923 [2024-07-24 19:23:29.050785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.923 [2024-07-24 19:23:29.050797] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1972e40, cid 0, qid 0 00:22:42.923 [2024-07-24 19:23:29.050954] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.923 [2024-07-24 19:23:29.050961] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.923 [2024-07-24 19:23:29.050965] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.923 [2024-07-24 19:23:29.050970] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1972e40) on tqpair=0x1907f00 00:22:42.923 [2024-07-24 19:23:29.050975] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:42.923 [2024-07-24 19:23:29.050981] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:42.923 [2024-07-24 19:23:29.050991] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:42.923 [2024-07-24 19:23:29.051004] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:42.923 [2024-07-24 19:23:29.051013] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.923 [2024-07-24 19:23:29.051018] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1907f00) 00:22:42.923 [2024-07-24 19:23:29.051025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.923 [2024-07-24 19:23:29.051038] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1972e40, cid 0, qid 0 00:22:42.923 [2024-07-24 19:23:29.051156] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:42.923 [2024-07-24 19:23:29.051162] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:42.923 [2024-07-24 19:23:29.051167] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:42.923 [2024-07-24 19:23:29.051172] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1907f00): datao=0, datal=4096, cccid=0 00:22:42.923 [2024-07-24 19:23:29.051177] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1972e40) on tqpair(0x1907f00): expected_datao=0, payload_size=4096 00:22:42.923 [2024-07-24 19:23:29.051183] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.923 [2024-07-24 19:23:29.051286] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:42.923 [2024-07-24 19:23:29.051291] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:42.923 [2024-07-24 19:23:29.051349] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.923 [2024-07-24 19:23:29.051356] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.923 [2024-07-24 19:23:29.051360] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.923 [2024-07-24 19:23:29.051365] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1972e40) on tqpair=0x1907f00 00:22:42.923 [2024-07-24 19:23:29.051372] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:42.923 [2024-07-24 19:23:29.051378] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:42.923 [2024-07-24 19:23:29.051384] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:42.923 [2024-07-24 19:23:29.051389] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:42.923 [2024-07-24 19:23:29.051395] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:42.923 [2024-07-24 19:23:29.051400] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:42.923 [2024-07-24 19:23:29.051413] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:42.923 [2024-07-24 19:23:29.051423] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.923 [2024-07-24 19:23:29.051428] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.923 [2024-07-24 19:23:29.051432] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1907f00) 00:22:42.924 [2024-07-24 19:23:29.051439] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:42.924 [2024-07-24 19:23:29.051452] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1972e40, cid 0, qid 0 00:22:42.924 [2024-07-24 19:23:29.051544] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.924 [2024-07-24 19:23:29.051551] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.924 [2024-07-24 19:23:29.051555] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.924 [2024-07-24 19:23:29.051560] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1972e40) on tqpair=0x1907f00 00:22:42.924 [2024-07-24 19:23:29.051567] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.924 [2024-07-24 19:23:29.051571] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.924 [2024-07-24 19:23:29.051576] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1907f00) 00:22:42.924 [2024-07-24 19:23:29.051582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.924 [2024-07-24 19:23:29.051589] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.924 [2024-07-24 19:23:29.051594] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.924 [2024-07-24 19:23:29.051598] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1907f00) 00:22:42.924 [2024-07-24 19:23:29.051605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.924 [2024-07-24 19:23:29.051611] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.924 [2024-07-24 19:23:29.051616] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.924 [2024-07-24 19:23:29.051621] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1907f00) 00:22:42.924 [2024-07-24 19:23:29.051627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.924 [2024-07-24 19:23:29.051633] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.924 [2024-07-24 19:23:29.051638] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.924 [2024-07-24 19:23:29.051643] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1907f00) 00:22:42.924 [2024-07-24 19:23:29.051649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.924 [2024-07-24 19:23:29.051655] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:42.924 [2024-07-24 19:23:29.051666] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:42.924 [2024-07-24 19:23:29.051674] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.924 [2024-07-24 19:23:29.051678] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1907f00) 00:22:42.924 [2024-07-24 19:23:29.051685] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.924 [2024-07-24 19:23:29.051698] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1972e40, cid 0, qid 0 00:22:42.924 [2024-07-24 19:23:29.051704] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1972fc0, cid 1, qid 0 00:22:42.924 [2024-07-24 19:23:29.051711] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1973140, cid 2, qid 0 00:22:42.924 [2024-07-24 19:23:29.051722] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19732c0, cid 3, qid 0 00:22:42.924 [2024-07-24 19:23:29.051728] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1973440, cid 4, qid 0 00:22:42.924 [2024-07-24 19:23:29.051837] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.924 [2024-07-24 19:23:29.051844] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.924 [2024-07-24 19:23:29.051848] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.924 [2024-07-24 19:23:29.051853] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1973440) on tqpair=0x1907f00 00:22:42.924 [2024-07-24 19:23:29.051859] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:42.924 [2024-07-24 19:23:29.051865] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
identify controller iocs specific (timeout 30000 ms) 00:22:42.924 [2024-07-24 19:23:29.051877] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:42.924 [2024-07-24 19:23:29.051884] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:42.924 [2024-07-24 19:23:29.051891] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.924 [2024-07-24 19:23:29.051896] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.924 [2024-07-24 19:23:29.051901] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1907f00) 00:22:42.924 [2024-07-24 19:23:29.051908] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:42.924 [2024-07-24 19:23:29.051920] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1973440, cid 4, qid 0 00:22:42.924 [2024-07-24 19:23:29.052007] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.924 [2024-07-24 19:23:29.052014] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.924 [2024-07-24 19:23:29.052018] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.924 [2024-07-24 19:23:29.052023] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1973440) on tqpair=0x1907f00 00:22:42.924 [2024-07-24 19:23:29.052076] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:42.924 [2024-07-24 19:23:29.052087] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:42.924 [2024-07-24 19:23:29.052095] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.924 [2024-07-24 19:23:29.052099] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1907f00) 00:22:42.924 [2024-07-24 19:23:29.052106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.924 [2024-07-24 19:23:29.052118] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1973440, cid 4, qid 0 00:22:42.924 [2024-07-24 19:23:29.052212] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:42.924 [2024-07-24 19:23:29.052219] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:42.924 [2024-07-24 19:23:29.052224] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:42.924 [2024-07-24 19:23:29.052228] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1907f00): datao=0, datal=4096, cccid=4 00:22:42.924 [2024-07-24 19:23:29.052234] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1973440) on tqpair(0x1907f00): expected_datao=0, payload_size=4096 00:22:42.924 [2024-07-24 19:23:29.052240] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.924 [2024-07-24 19:23:29.052325] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:42.924 [2024-07-24 19:23:29.052332] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:42.924 [2024-07-24 19:23:29.094723] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:22:42.924 [2024-07-24 19:23:29.094733] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.924 [2024-07-24 19:23:29.094754] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.924 [2024-07-24 19:23:29.094759] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1973440) on tqpair=0x1907f00 00:22:42.924 [2024-07-24 19:23:29.094770] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:42.924 [2024-07-24 19:23:29.094783] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:42.924 [2024-07-24 19:23:29.094794] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:42.924 [2024-07-24 19:23:29.094802] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.924 [2024-07-24 19:23:29.094807] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1907f00) 00:22:42.924 [2024-07-24 19:23:29.094814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.924 [2024-07-24 19:23:29.094828] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1973440, cid 4, qid 0 00:22:42.924 [2024-07-24 19:23:29.095016] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:42.924 [2024-07-24 19:23:29.095024] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:42.924 [2024-07-24 19:23:29.095028] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:42.924 [2024-07-24 19:23:29.095033] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1907f00): datao=0, datal=4096, cccid=4 00:22:42.924 [2024-07-24 19:23:29.095038] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1973440) on tqpair(0x1907f00): expected_datao=0, payload_size=4096 00:22:42.924 [2024-07-24 19:23:29.095044] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.924 [2024-07-24 19:23:29.095051] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:42.924 [2024-07-24 19:23:29.095056] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:42.924 [2024-07-24 19:23:29.136864] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.924 [2024-07-24 19:23:29.136876] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.924 [2024-07-24 19:23:29.136880] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.924 [2024-07-24 19:23:29.136885] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1973440) on tqpair=0x1907f00 00:22:42.924 [2024-07-24 19:23:29.136900] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:42.924 [2024-07-24 19:23:29.136911] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:42.925 [2024-07-24 19:23:29.136921] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.925 [2024-07-24 19:23:29.136926] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1907f00) 00:22:42.925 [2024-07-24 19:23:29.136934] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.925 [2024-07-24 19:23:29.136947] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1973440, cid 4, qid 0 00:22:42.925 [2024-07-24 19:23:29.137039] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:42.925 [2024-07-24 19:23:29.137046] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:42.925 [2024-07-24 19:23:29.137051] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:42.925 [2024-07-24 19:23:29.137055] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1907f00): datao=0, datal=4096, cccid=4 00:22:42.925 [2024-07-24 19:23:29.137064] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1973440) on tqpair(0x1907f00): expected_datao=0, payload_size=4096 00:22:42.925 [2024-07-24 19:23:29.137070] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.925 [2024-07-24 19:23:29.137175] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:42.925 [2024-07-24 19:23:29.137180] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:43.187 [2024-07-24 19:23:29.177861] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.187 [2024-07-24 19:23:29.177872] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.187 [2024-07-24 19:23:29.177877] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.187 [2024-07-24 19:23:29.177882] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1973440) on tqpair=0x1907f00 00:22:43.187 [2024-07-24 19:23:29.177891] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:43.187 [2024-07-24 19:23:29.177901] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:43.187 [2024-07-24 19:23:29.177912] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:43.187 [2024-07-24 19:23:29.177921] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:43.187 [2024-07-24 19:23:29.177928] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:43.187 [2024-07-24 19:23:29.177934] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:43.187 [2024-07-24 19:23:29.177940] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:43.187 [2024-07-24 19:23:29.177946] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:43.187 [2024-07-24 19:23:29.177953] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:43.187 [2024-07-24 19:23:29.177967] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.187 [2024-07-24 19:23:29.177972] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x1907f00) 00:22:43.187 [2024-07-24 19:23:29.177979] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.187 [2024-07-24 19:23:29.177987] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.187 [2024-07-24 19:23:29.177992] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.187 [2024-07-24 19:23:29.177996] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1907f00) 00:22:43.187 [2024-07-24 19:23:29.178003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.187 [2024-07-24 19:23:29.178018] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1973440, cid 4, qid 0 00:22:43.187 [2024-07-24 19:23:29.178023] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19735c0, cid 5, qid 0 00:22:43.187 [2024-07-24 19:23:29.178189] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.187 [2024-07-24 19:23:29.178195] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.187 [2024-07-24 19:23:29.178200] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.187 [2024-07-24 19:23:29.178205] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1973440) on tqpair=0x1907f00 00:22:43.187 [2024-07-24 19:23:29.178211] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.187 [2024-07-24 19:23:29.178218] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.187 [2024-07-24 19:23:29.178222] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.187 [2024-07-24 19:23:29.178229] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19735c0) on tqpair=0x1907f00 00:22:43.187 [2024-07-24 19:23:29.178240] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.187 [2024-07-24 19:23:29.178245] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1907f00) 00:22:43.187 [2024-07-24 19:23:29.178251] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.187 [2024-07-24 19:23:29.178263] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19735c0, cid 5, qid 0 00:22:43.187 [2024-07-24 19:23:29.178413] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.187 [2024-07-24 19:23:29.178419] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.187 [2024-07-24 19:23:29.178424] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.187 [2024-07-24 19:23:29.178428] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19735c0) on tqpair=0x1907f00 00:22:43.187 [2024-07-24 19:23:29.178439] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.187 [2024-07-24 19:23:29.178444] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1907f00) 00:22:43.187 [2024-07-24 19:23:29.178450] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.187 [2024-07-24 19:23:29.178462] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19735c0, cid 5, qid 0 00:22:43.187 [2024-07-24 19:23:29.178546] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.187 [2024-07-24 19:23:29.178553] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.187 [2024-07-24 19:23:29.178558] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.187 [2024-07-24 19:23:29.178562] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19735c0) on tqpair=0x1907f00 00:22:43.187 [2024-07-24 19:23:29.178572] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.187 [2024-07-24 19:23:29.178577] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1907f00) 00:22:43.187 [2024-07-24 19:23:29.178583] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.187 [2024-07-24 19:23:29.178594] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19735c0, cid 5, qid 0 00:22:43.187 [2024-07-24 19:23:29.178681] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.187 [2024-07-24 19:23:29.178688] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.187 [2024-07-24 19:23:29.178692] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.188 [2024-07-24 19:23:29.178697] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19735c0) on tqpair=0x1907f00 00:22:43.188 [2024-07-24 19:23:29.178712] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.188 [2024-07-24 19:23:29.182724] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1907f00) 00:22:43.188 [2024-07-24 19:23:29.182732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.188 [2024-07-24 19:23:29.182741] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.188 [2024-07-24 19:23:29.182746] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1907f00) 00:22:43.188 [2024-07-24 19:23:29.182752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.188 [2024-07-24 19:23:29.182760] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.188 [2024-07-24 19:23:29.182764] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1907f00) 00:22:43.188 [2024-07-24 19:23:29.182771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.188 [2024-07-24 19:23:29.182781] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.188 [2024-07-24 19:23:29.182786] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1907f00) 00:22:43.188 [2024-07-24 19:23:29.182793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.188 [2024-07-24 19:23:29.182807] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19735c0, cid 5, qid 0 00:22:43.188 [2024-07-24 19:23:29.182812] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1973440, cid 4, qid 0 
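The four GET LOG PAGE commands just issued pack the Log Page Identifier into CDW10 bits 07:00 and the zero-based dword count (NUMDL) into bits 31:16, so the requested length is (NUMDL + 1) * 4 bytes; that is exactly the datal carried by the c2h_data PDUs that follow (8192, 512, 512 and 4096 bytes). A minimal standalone C sketch of the decoding, using only the cdw10 values from this trace (the field layout is taken from the NVMe 1.3 base spec, not from SPDK source):

    /* Decode the Get Log Page CDW10 values observed above.
     * NVMe 1.3: CDW10 bits 07:00 = Log Page Identifier (LID),
     *           bits 31:16 = Number of Dwords Lower (NUMDL), zero-based. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* cdw10 and c2h_data payload sizes (datal) as logged */
        const uint32_t cdw10[] = { 0x07ff0001, 0x007f0002, 0x007f0003, 0x03ff0005 };
        const unsigned datal[] = { 8192, 512, 512, 4096 };
        const char *name[] = {
            "Error Information", "SMART / Health Information",
            "Firmware Slot Information", "Commands Supported and Effects"
        };

        for (int i = 0; i < 4; i++) {
            unsigned lid   = cdw10[i] & 0xff;   /* bits 07:00 */
            unsigned numdl = cdw10[i] >> 16;    /* bits 31:16 */
            unsigned bytes = (numdl + 1) * 4;   /* zero-based dword count */
            printf("LID 0x%02x (%s): %u bytes, datal in log: %u\n",
                   lid, name[i], bytes, datal[i]);
        }
        return 0;
    }

Each computed size matches the corresponding datal, including the 512-byte SMART/Health payload behind the Health Information section further down.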
00:22:43.188 [2024-07-24 19:23:29.182818] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1973740, cid 6, qid 0 00:22:43.188 [2024-07-24 19:23:29.182823] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19738c0, cid 7, qid 0 00:22:43.188 [2024-07-24 19:23:29.183090] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:43.188 [2024-07-24 19:23:29.183098] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:43.188 [2024-07-24 19:23:29.183103] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:43.188 [2024-07-24 19:23:29.183107] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1907f00): datao=0, datal=8192, cccid=5 00:22:43.188 [2024-07-24 19:23:29.183113] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19735c0) on tqpair(0x1907f00): expected_datao=0, payload_size=8192 00:22:43.188 [2024-07-24 19:23:29.183119] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.188 [2024-07-24 19:23:29.183126] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:43.188 [2024-07-24 19:23:29.183131] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:43.188 [2024-07-24 19:23:29.183137] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:43.188 [2024-07-24 19:23:29.183143] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:43.188 [2024-07-24 19:23:29.183148] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:43.188 [2024-07-24 19:23:29.183152] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1907f00): datao=0, datal=512, cccid=4 00:22:43.188 [2024-07-24 19:23:29.183158] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1973440) on tqpair(0x1907f00): expected_datao=0, payload_size=512 00:22:43.188 [2024-07-24 19:23:29.183163] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.188 [2024-07-24 19:23:29.183170] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:43.188 [2024-07-24 19:23:29.183174] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:43.188 [2024-07-24 19:23:29.183180] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:43.188 [2024-07-24 19:23:29.183186] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:43.188 [2024-07-24 19:23:29.183191] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:43.188 [2024-07-24 19:23:29.183195] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1907f00): datao=0, datal=512, cccid=6 00:22:43.188 [2024-07-24 19:23:29.183201] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1973740) on tqpair(0x1907f00): expected_datao=0, payload_size=512 00:22:43.188 [2024-07-24 19:23:29.183207] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.188 [2024-07-24 19:23:29.183213] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:43.188 [2024-07-24 19:23:29.183218] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:43.188 [2024-07-24 19:23:29.183224] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:43.188 [2024-07-24 19:23:29.183230] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:43.188 [2024-07-24 19:23:29.183234] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:43.188 [2024-07-24 19:23:29.183239] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1907f00): datao=0, datal=4096, cccid=7
00:22:43.188 [2024-07-24 19:23:29.183244] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19738c0) on tqpair(0x1907f00): expected_datao=0, payload_size=4096
00:22:43.188 [2024-07-24 19:23:29.183252] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:43.188 [2024-07-24 19:23:29.183259] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:43.188 [2024-07-24 19:23:29.183263] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:43.188 [2024-07-24 19:23:29.183272] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:43.188 [2024-07-24 19:23:29.183278] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:43.188 [2024-07-24 19:23:29.183283] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:43.188 [2024-07-24 19:23:29.183287] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19735c0) on tqpair=0x1907f00
00:22:43.188 [2024-07-24 19:23:29.183300] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:43.188 [2024-07-24 19:23:29.183307] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:43.188 [2024-07-24 19:23:29.183311] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:43.188 [2024-07-24 19:23:29.183316] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1973440) on tqpair=0x1907f00
00:22:43.188 [2024-07-24 19:23:29.183326] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:43.188 [2024-07-24 19:23:29.183333] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:43.188 [2024-07-24 19:23:29.183337] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:43.188 [2024-07-24 19:23:29.183342] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1973740) on tqpair=0x1907f00
00:22:43.188 [2024-07-24 19:23:29.183349] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:43.188 [2024-07-24 19:23:29.183356] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:43.188 [2024-07-24 19:23:29.183360] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:43.188 [2024-07-24 19:23:29.183365] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19738c0) on tqpair=0x1907f00
00:22:43.188 =====================================================
00:22:43.188 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:43.188 =====================================================
00:22:43.188 Controller Capabilities/Features
00:22:43.188 ================================
00:22:43.188 Vendor ID: 8086
00:22:43.188 Subsystem Vendor ID: 8086
00:22:43.188 Serial Number: SPDK00000000000001
00:22:43.188 Model Number: SPDK bdev Controller
00:22:43.188 Firmware Version: 24.09
00:22:43.188 Recommended Arb Burst: 6
00:22:43.188 IEEE OUI Identifier: e4 d2 5c
00:22:43.188 Multi-path I/O
00:22:43.188 May have multiple subsystem ports: Yes
00:22:43.188 May have multiple controllers: Yes
00:22:43.188 Associated with SR-IOV VF: No
00:22:43.188 Max Data Transfer Size: 131072
00:22:43.188 Max Number of Namespaces: 32
00:22:43.188 Max Number of I/O Queues: 127
00:22:43.188 NVMe Specification Version (VS): 1.3
00:22:43.188 NVMe Specification Version (Identify): 1.3
00:22:43.188 Maximum Queue Entries: 128
00:22:43.188 Contiguous Queues Required: Yes
00:22:43.188 Arbitration Mechanisms Supported
00:22:43.188 Weighted Round Robin: Not Supported
00:22:43.188 Vendor Specific: Not Supported
00:22:43.188 Reset Timeout: 15000 ms
00:22:43.188 Doorbell Stride: 4 bytes
00:22:43.188 NVM Subsystem Reset: Not Supported
00:22:43.188 Command Sets Supported
00:22:43.188 NVM Command Set: Supported
00:22:43.188 Boot Partition: Not Supported
00:22:43.188 Memory Page Size Minimum: 4096 bytes
00:22:43.188 Memory Page Size Maximum: 4096 bytes
00:22:43.188 Persistent Memory Region: Not Supported
00:22:43.188 Optional Asynchronous Events Supported
00:22:43.188 Namespace Attribute Notices: Supported
00:22:43.188 Firmware Activation Notices: Not Supported
00:22:43.188 ANA Change Notices: Not Supported
00:22:43.188 PLE Aggregate Log Change Notices: Not Supported
00:22:43.188 LBA Status Info Alert Notices: Not Supported
00:22:43.188 EGE Aggregate Log Change Notices: Not Supported
00:22:43.188 Normal NVM Subsystem Shutdown event: Not Supported
00:22:43.188 Zone Descriptor Change Notices: Not Supported
00:22:43.188 Discovery Log Change Notices: Not Supported
00:22:43.188 Controller Attributes
00:22:43.188 128-bit Host Identifier: Supported
00:22:43.188 Non-Operational Permissive Mode: Not Supported
00:22:43.188 NVM Sets: Not Supported
00:22:43.188 Read Recovery Levels: Not Supported
00:22:43.188 Endurance Groups: Not Supported
00:22:43.188 Predictable Latency Mode: Not Supported
00:22:43.188 Traffic Based Keep Alive: Not Supported
00:22:43.188 Namespace Granularity: Not Supported
00:22:43.188 SQ Associations: Not Supported
00:22:43.188 UUID List: Not Supported
00:22:43.188 Multi-Domain Subsystem: Not Supported
00:22:43.189 Fixed Capacity Management: Not Supported
00:22:43.189 Variable Capacity Management: Not Supported
00:22:43.189 Delete Endurance Group: Not Supported
00:22:43.189 Delete NVM Set: Not Supported
00:22:43.189 Extended LBA Formats Supported: Not Supported
00:22:43.189 Flexible Data Placement Supported: Not Supported
00:22:43.189
00:22:43.189 Controller Memory Buffer Support
00:22:43.189 ================================
00:22:43.189 Supported: No
00:22:43.189
00:22:43.189 Persistent Memory Region Support
00:22:43.189 ================================
00:22:43.189 Supported: No
00:22:43.189
00:22:43.189 Admin Command Set Attributes
00:22:43.189 ============================
00:22:43.189 Security Send/Receive: Not Supported
00:22:43.189 Format NVM: Not Supported
00:22:43.189 Firmware Activate/Download: Not Supported
00:22:43.189 Namespace Management: Not Supported
00:22:43.189 Device Self-Test: Not Supported
00:22:43.189 Directives: Not Supported
00:22:43.189 NVMe-MI: Not Supported
00:22:43.189 Virtualization Management: Not Supported
00:22:43.189 Doorbell Buffer Config: Not Supported
00:22:43.189 Get LBA Status Capability: Not Supported
00:22:43.189 Command & Feature Lockdown Capability: Not Supported
00:22:43.189 Abort Command Limit: 4
00:22:43.189 Async Event Request Limit: 4
00:22:43.189 Number of Firmware Slots: N/A
00:22:43.189 Firmware Slot 1 Read-Only: N/A
00:22:43.189 Firmware Activation Without Reset: N/A
00:22:43.189 Multiple Update Detection Support: N/A
00:22:43.189 Firmware Update Granularity: No Information Provided
00:22:43.189 Per-Namespace SMART Log: No
00:22:43.189 Asymmetric Namespace Access Log Page: Not Supported
00:22:43.189 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:22:43.189 Command Effects Log Page: Supported
00:22:43.189 Get Log Page Extended Data: Supported
00:22:43.189 Telemetry Log Pages: Not Supported
00:22:43.189 Persistent Event Log Pages: Not Supported
00:22:43.189 Supported Log Pages Log Page: May Support
00:22:43.189 Commands Supported & Effects Log Page: Not Supported
00:22:43.189 Feature Identifiers & Effects Log Page: May Support
00:22:43.189 NVMe-MI Commands & Effects Log Page: May Support
00:22:43.189 Data Area 4 for Telemetry Log: Not Supported
00:22:43.189 Error Log Page Entries Supported: 128
00:22:43.189 Keep Alive: Supported
00:22:43.189 Keep Alive Granularity: 10000 ms
00:22:43.189
00:22:43.189 NVM Command Set Attributes
00:22:43.189 ==========================
00:22:43.189 Submission Queue Entry Size
00:22:43.189 Max: 64
00:22:43.189 Min: 64
00:22:43.189 Completion Queue Entry Size
00:22:43.189 Max: 16
00:22:43.189 Min: 16
00:22:43.189 Number of Namespaces: 32
00:22:43.189 Compare Command: Supported
00:22:43.189 Write Uncorrectable Command: Not Supported
00:22:43.189 Dataset Management Command: Supported
00:22:43.189 Write Zeroes Command: Supported
00:22:43.189 Set Features Save Field: Not Supported
00:22:43.189 Reservations: Supported
00:22:43.189 Timestamp: Not Supported
00:22:43.189 Copy: Supported
00:22:43.189 Volatile Write Cache: Present
00:22:43.189 Atomic Write Unit (Normal): 1
00:22:43.189 Atomic Write Unit (PFail): 1
00:22:43.189 Atomic Compare & Write Unit: 1
00:22:43.189 Fused Compare & Write: Supported
00:22:43.189 Scatter-Gather List
00:22:43.189 SGL Command Set: Supported
00:22:43.189 SGL Keyed: Supported
00:22:43.189 SGL Bit Bucket Descriptor: Not Supported
00:22:43.189 SGL Metadata Pointer: Not Supported
00:22:43.189 Oversized SGL: Not Supported
00:22:43.189 SGL Metadata Address: Not Supported
00:22:43.189 SGL Offset: Supported
00:22:43.189 Transport SGL Data Block: Not Supported
00:22:43.189 Replay Protected Memory Block: Not Supported
00:22:43.189
00:22:43.189 Firmware Slot Information
00:22:43.189 =========================
00:22:43.189 Active slot: 1
00:22:43.189 Slot 1 Firmware Revision: 24.09
00:22:43.189
00:22:43.189
00:22:43.189 Commands Supported and Effects
00:22:43.189 ==============================
00:22:43.189 Admin Commands
00:22:43.189 --------------
00:22:43.189 Get Log Page (02h): Supported
00:22:43.189 Identify (06h): Supported
00:22:43.189 Abort (08h): Supported
00:22:43.189 Set Features (09h): Supported
00:22:43.189 Get Features (0Ah): Supported
00:22:43.189 Asynchronous Event Request (0Ch): Supported
00:22:43.189 Keep Alive (18h): Supported
00:22:43.189 I/O Commands
00:22:43.189 ------------
00:22:43.189 Flush (00h): Supported LBA-Change
00:22:43.189 Write (01h): Supported LBA-Change
00:22:43.189 Read (02h): Supported
00:22:43.189 Compare (05h): Supported
00:22:43.189 Write Zeroes (08h): Supported LBA-Change
00:22:43.189 Dataset Management (09h): Supported LBA-Change
00:22:43.189 Copy (19h): Supported LBA-Change
00:22:43.189
00:22:43.189 Error Log
00:22:43.189 =========
00:22:43.189
00:22:43.189 Arbitration
00:22:43.189 ===========
00:22:43.189 Arbitration Burst: 1
00:22:43.189
00:22:43.189 Power Management
00:22:43.189 ================
00:22:43.189 Number of Power States: 1
00:22:43.189 Current Power State: Power State #0
00:22:43.189 Power State #0:
00:22:43.189 Max Power: 0.00 W
00:22:43.189 Non-Operational State: Operational
00:22:43.189 Entry Latency: Not Reported
00:22:43.189 Exit Latency: Not Reported
00:22:43.189 Relative Read Throughput: 0
00:22:43.189 Relative Read Latency: 0
00:22:43.189 Relative Write Throughput: 0
00:22:43.189 Relative Write Latency: 0
00:22:43.189 Idle Power: Not Reported
00:22:43.189 Active Power: Not Reported
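The capability sections above are rendered largely from the 4 KiB Identify Controller payload fetched earlier in the trace (IDENTIFY cdw10:00000001, c2h_data datal=4096, cccid=0); the Health Information block that follows comes instead from the 512-byte SMART/Health log page. A standalone C sketch (field offsets per the NVMe 1.3 spec, not SPDK source) of where a few of the reported values sit in the raw Identify structure, including how an MDTS of 5 with the 4 KiB minimum page size yields the Max Data Transfer Size of 131072 shown above:

    /* A toy 4 KiB Identify Controller buffer filled with the values this
     * controller reports; offsets are from the NVMe 1.3 spec. */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void)
    {
        uint8_t id[4096] = {0};             /* stand-in for the c2h_data payload */

        id[0] = 0x86; id[1] = 0x80;         /* VID 0x8086, little-endian (bytes 0..1) */
        memcpy(&id[4],  "SPDK00000000000001  ", 20); /* Serial Number (bytes 4..23) */
        memcpy(&id[24], "SPDK bdev Controller", 20); /* Model Number (bytes 24..63) */
        id[77] = 5;                         /* MDTS (byte 77)                    */
        id[78] = 0x01;                      /* CNTLID 0x0001 (bytes 78..79)      */

        uint16_t vid    = (uint16_t)(id[0] | (id[1] << 8));
        uint16_t cntlid = (uint16_t)(id[78] | (id[79] << 8));

        /* Max transfer = 2^MDTS * minimum memory page size (CAP.MPSMIN -> 4 KiB). */
        unsigned max_xfer = (1u << id[77]) * 4096;

        printf("VID 0x%04x SN '%.20s' MN '%.20s'\n",
               vid, (char *)&id[4], (char *)&id[24]);
        printf("CNTLID 0x%04x MDTS %u -> max transfer %u bytes\n",
               cntlid, id[77], max_xfer);   /* 131072, as logged */
        return 0;
    }

The CNTLID 0x0001 and 131072-byte limit printed during init by nvme_ctrlr_identify_done are the same two fields read back here.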
00:22:43.189 Non-Operational Permissive Mode: Not Supported
00:22:43.189
00:22:43.189 Health Information
00:22:43.189 ==================
00:22:43.189 Critical Warnings:
00:22:43.189 Available Spare Space: OK
00:22:43.189 Temperature: OK
00:22:43.189 Device Reliability: OK
00:22:43.189 Read Only: No
00:22:43.189 Volatile Memory Backup: OK
00:22:43.189 Current Temperature: 0 Kelvin (-273 Celsius)
00:22:43.189 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:22:43.189 Available Spare: 0%
00:22:43.189 Available Spare Threshold: 0%
00:22:43.189 Life Percentage Used:[2024-07-24 19:23:29.183449] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:43.189 [2024-07-24 19:23:29.183455] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1907f00)
00:22:43.189 [2024-07-24 19:23:29.183462] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.189 [2024-07-24 19:23:29.183476] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19738c0, cid 7, qid 0
00:22:43.189 [2024-07-24 19:23:29.183583] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:43.189 [2024-07-24 19:23:29.183590] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:43.189 [2024-07-24 19:23:29.183594] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:43.189 [2024-07-24 19:23:29.183599] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19738c0) on tqpair=0x1907f00
00:22:43.189 [2024-07-24 19:23:29.183627] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:22:43.189 [2024-07-24 19:23:29.183638] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1972e40) on tqpair=0x1907f00
00:22:43.189 [2024-07-24 19:23:29.183645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.189 [2024-07-24 19:23:29.183651] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1972fc0) on tqpair=0x1907f00
00:22:43.189 [2024-07-24 19:23:29.183657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.189 [2024-07-24 19:23:29.183663] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1973140) on tqpair=0x1907f00
00:22:43.189 [2024-07-24 19:23:29.183668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.189 [2024-07-24 19:23:29.183674] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19732c0) on tqpair=0x1907f00
00:22:43.189 [2024-07-24 19:23:29.183679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.189 [2024-07-24 19:23:29.183690] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:43.189 [2024-07-24 19:23:29.183694] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:43.189 [2024-07-24 19:23:29.183699] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1907f00)
00:22:43.189 [2024-07-24 19:23:29.183706] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
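Destruct begins here: the four async event requests queued during init (tcp reqs 0x1972e40 through 0x19732c0) are completed with ABORTED - SQ DELETION, printed as (00/08), i.e. Status Code Type 0x0 (generic) and Status Code 0x08 (command aborted due to SQ deletion) per NVMe 1.3. A standalone C sketch (not SPDK source) of how the printed fields unpack from the completion entry's Dwords 2 and 3:

    /* Completion Queue Entry (NVMe 1.3): DW2 = SQ head (15:0) | SQ id (31:16),
     * DW3 = CID (15:0) | phase (16) | SC (24:17) | SCT (27:25)
     *       | More (30) | Do Not Retry (31). */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t dw2 = 0x00000000;          /* sqid 0, sqhd 0x0000         */
        uint32_t dw3 = 0x08u << 17;         /* SCT 0x0, SC 0x08, p/m/dnr 0 */

        unsigned sqhd = dw2 & 0xffff;
        unsigned cid  = dw3 & 0xffff;
        unsigned p    = (dw3 >> 16) & 0x1;
        unsigned sc   = (dw3 >> 17) & 0xff;
        unsigned sct  = (dw3 >> 25) & 0x7;
        unsigned m    = (dw3 >> 30) & 0x1;
        unsigned dnr  = (dw3 >> 31) & 0x1;

        /* Prints "(00/08) cid:0 sqhd:0000 p:0 m:0 dnr:0", matching the log. */
        printf("(%02x/%02x) cid:%u sqhd:%04x p:%u m:%u dnr:%u\n",
               sct, sc, cid, sqhd, p, m, dnr);
        return 0;
    }

With DNR clear a host could retry these commands, but here they are just the in-flight AER slots being drained before the admin queue is torn down.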
00:22:43.190 [2024-07-24 19:23:29.183726] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19732c0, cid 3, qid 0
00:22:43.190 [2024-07-24 19:23:29.183812] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:43.190 [2024-07-24 19:23:29.183819] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:43.190 [2024-07-24 19:23:29.183823] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:43.190 [2024-07-24 19:23:29.183828] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19732c0) on tqpair=0x1907f00
00:22:43.190 [2024-07-24 19:23:29.183835] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:43.190 [2024-07-24 19:23:29.183839] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:43.190 [2024-07-24 19:23:29.183844] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1907f00)
00:22:43.190 [2024-07-24 19:23:29.183851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.190 [2024-07-24 19:23:29.183866] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19732c0, cid 3, qid 0
00:22:43.190 [2024-07-24 19:23:29.183961] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:43.190 [2024-07-24 19:23:29.183967] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:43.190 [2024-07-24 19:23:29.183972] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:43.190 [2024-07-24 19:23:29.183976] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19732c0) on tqpair=0x1907f00
00:22:43.190 [2024-07-24 19:23:29.183982] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us
00:22:43.190 [2024-07-24 19:23:29.183988] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms
00:22:43.190 [2024-07-24 19:23:29.183998] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:43.190 [2024-07-24 19:23:29.184003] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:43.190 [2024-07-24 19:23:29.184008] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1907f00)
00:22:43.190 [2024-07-24 19:23:29.184015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.190 [2024-07-24 19:23:29.184026] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19732c0, cid 3, qid 0
00:22:43.190 [2024-07-24 19:23:29.184108] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:43.190 [2024-07-24 19:23:29.184115] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:43.190 [2024-07-24 19:23:29.184119] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:43.190 [2024-07-24 19:23:29.184124] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19732c0) on tqpair=0x1907f00
00:22:43.190 [2024-07-24 19:23:29.184134] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:43.190 [2024-07-24 19:23:29.184139] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:43.190 [2024-07-24 19:23:29.184143] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1907f00)
00:22:43.190 [2024-07-24 19:23:29.184150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.190 [2024-07-24 19:23:29.184161] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19732c0, cid 3, qid 0 00:22:43.190 [2024-07-24 19:23:29.184244] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.190 [2024-07-24 19:23:29.184251] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.190 [2024-07-24 19:23:29.184257] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.190 [2024-07-24 19:23:29.184262] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19732c0) on tqpair=0x1907f00 00:22:43.190 [2024-07-24 19:23:29.184271] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.190 [2024-07-24 19:23:29.184276] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.190 [2024-07-24 19:23:29.184281] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1907f00) 00:22:43.190 [2024-07-24 19:23:29.184287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.190 [2024-07-24 19:23:29.184299] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19732c0, cid 3, qid 0 00:22:43.190 [2024-07-24 19:23:29.184382] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.190 [2024-07-24 19:23:29.184388] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.190 [2024-07-24 19:23:29.184393] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.190 [2024-07-24 19:23:29.184398] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19732c0) on tqpair=0x1907f00 00:22:43.190 [2024-07-24 19:23:29.184407] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.190 [2024-07-24 19:23:29.184412] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.190 [2024-07-24 19:23:29.184417] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1907f00) 00:22:43.190 [2024-07-24 19:23:29.184423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.190 [2024-07-24 19:23:29.184434] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19732c0, cid 3, qid 0 00:22:43.190 [2024-07-24 19:23:29.184582] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.190 [2024-07-24 19:23:29.184589] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.190 [2024-07-24 19:23:29.184593] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.190 [2024-07-24 19:23:29.184598] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19732c0) on tqpair=0x1907f00 00:22:43.190 [2024-07-24 19:23:29.184609] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.190 [2024-07-24 19:23:29.184613] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.190 [2024-07-24 19:23:29.184618] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1907f00) 00:22:43.190 [2024-07-24 19:23:29.184625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.190 [2024-07-24 19:23:29.184636] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19732c0, cid 3, qid 0 00:22:43.190 [2024-07-24 
19:23:29.184723] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.190 [2024-07-24 19:23:29.184730] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.190 [2024-07-24 19:23:29.184735] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.190 [2024-07-24 19:23:29.184739] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19732c0) on tqpair=0x1907f00 00:22:43.190 [2024-07-24 19:23:29.184750] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.190 [2024-07-24 19:23:29.184755] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.190 [2024-07-24 19:23:29.184759] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1907f00) 00:22:43.190 [2024-07-24 19:23:29.184766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.190 [2024-07-24 19:23:29.184778] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19732c0, cid 3, qid 0 00:22:43.190 [2024-07-24 19:23:29.184860] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.190 [2024-07-24 19:23:29.184867] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.190 [2024-07-24 19:23:29.184871] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.190 [2024-07-24 19:23:29.184878] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19732c0) on tqpair=0x1907f00 00:22:43.190 [2024-07-24 19:23:29.184887] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.190 [2024-07-24 19:23:29.184892] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.190 [2024-07-24 19:23:29.184897] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1907f00) 00:22:43.190 [2024-07-24 19:23:29.184903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.190 [2024-07-24 19:23:29.184915] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19732c0, cid 3, qid 0 00:22:43.190 [2024-07-24 19:23:29.185002] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.190 [2024-07-24 19:23:29.185008] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.190 [2024-07-24 19:23:29.185012] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.190 [2024-07-24 19:23:29.185017] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19732c0) on tqpair=0x1907f00 00:22:43.190 [2024-07-24 19:23:29.185027] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.190 [2024-07-24 19:23:29.185031] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.190 [2024-07-24 19:23:29.185036] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1907f00) 00:22:43.190 [2024-07-24 19:23:29.185043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.190 [2024-07-24 19:23:29.185054] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19732c0, cid 3, qid 0 00:22:43.190 [2024-07-24 19:23:29.185142] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.190 [2024-07-24 19:23:29.185150] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.190 
[2024-07-24 19:23:29.185154] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.190 [2024-07-24 19:23:29.185159] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19732c0) on tqpair=0x1907f00 00:22:43.190 [2024-07-24 19:23:29.185169] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.190 [2024-07-24 19:23:29.185173] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.190 [2024-07-24 19:23:29.185178] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1907f00) 00:22:43.190 [2024-07-24 19:23:29.185185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.190 [2024-07-24 19:23:29.185196] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19732c0, cid 3, qid 0 00:22:43.190 [2024-07-24 19:23:29.190725] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.191 [2024-07-24 19:23:29.190734] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.191 [2024-07-24 19:23:29.190738] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.191 [2024-07-24 19:23:29.190759] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19732c0) on tqpair=0x1907f00 00:22:43.191 [2024-07-24 19:23:29.190769] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:43.191 [2024-07-24 19:23:29.190774] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:43.191 [2024-07-24 19:23:29.190779] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1907f00) 00:22:43.191 [2024-07-24 19:23:29.190786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.191 [2024-07-24 19:23:29.190799] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19732c0, cid 3, qid 0 00:22:43.191 [2024-07-24 19:23:29.191010] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:43.191 [2024-07-24 19:23:29.191017] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:43.191 [2024-07-24 19:23:29.191021] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:43.191 [2024-07-24 19:23:29.191026] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19732c0) on tqpair=0x1907f00 00:22:43.191 [2024-07-24 19:23:29.191035] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:22:43.191 0% 00:22:43.191 Data Units Read: 0 00:22:43.191 Data Units Written: 0 00:22:43.191 Host Read Commands: 0 00:22:43.191 Host Write Commands: 0 00:22:43.192 Controller Busy Time: 0 minutes 00:22:43.192 Power Cycles: 0 00:22:43.192 Power On Hours: 0 hours 00:22:43.192 Unsafe Shutdowns: 0 00:22:43.192 Unrecoverable Media Errors: 0 00:22:43.192 Lifetime Error Log Entries: 0 00:22:43.192 Warning
Temperature Time: 0 minutes 00:22:43.192 Critical Temperature Time: 0 minutes 00:22:43.192 00:22:43.192 Number of Queues 00:22:43.192 ================ 00:22:43.192 Number of I/O Submission Queues: 127 00:22:43.192 Number of I/O Completion Queues: 127 00:22:43.192 00:22:43.192 Active Namespaces 00:22:43.192 ================= 00:22:43.192 Namespace ID:1 00:22:43.192 Error Recovery Timeout: Unlimited 00:22:43.192 Command Set Identifier: NVM (00h) 00:22:43.192 Deallocate: Supported 00:22:43.192 Deallocated/Unwritten Error: Not Supported 00:22:43.192 Deallocated Read Value: Unknown 00:22:43.192 Deallocate in Write Zeroes: Not Supported 00:22:43.192 Deallocated Guard Field: 0xFFFF 00:22:43.192 Flush: Supported 00:22:43.192 Reservation: Supported 00:22:43.192 Namespace Sharing Capabilities: Multiple Controllers 00:22:43.192 Size (in LBAs): 131072 (0GiB) 00:22:43.192 Capacity (in LBAs): 131072 (0GiB) 00:22:43.192 Utilization (in LBAs): 131072 (0GiB) 00:22:43.192 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:43.192 EUI64: ABCDEF0123456789 00:22:43.192 UUID: d4f23391-0cf3-4f95-8f99-579b629718ac 00:22:43.192 Thin Provisioning: Not Supported 00:22:43.192 Per-NS Atomic Units: Yes 00:22:43.192 Atomic Boundary Size (Normal): 0 00:22:43.192 Atomic Boundary Size (PFail): 0 00:22:43.192 Atomic Boundary Offset: 0 00:22:43.192 Maximum Single Source Range Length: 65535 00:22:43.192 Maximum Copy Length: 65535 00:22:43.192 Maximum Source Range Count: 1 00:22:43.192 NGUID/EUI64 Never Reused: No 00:22:43.192 Namespace Write Protected: No 00:22:43.192 Number of LBA Formats: 1 00:22:43.192 Current LBA Format: LBA Format #00 00:22:43.192 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:43.192 00:22:43.192 19:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:43.192 19:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:43.192 19:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.192 19:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:43.192 19:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.192 19:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:43.192 19:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:43.192 19:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:43.192 19:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:22:43.192 19:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:43.192 19:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:22:43.192 19:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:43.192 19:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:43.192 rmmod nvme_tcp 00:22:43.192 rmmod nvme_fabrics 00:22:43.192 rmmod nvme_keyring 00:22:43.192 19:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:43.192 19:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:22:43.192 19:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:22:43.192 19:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1606156 ']' 00:22:43.192 19:23:29 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1606156 00:22:43.192 19:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 1606156 ']' 00:22:43.192 19:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 1606156 00:22:43.192 19:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:22:43.192 19:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:43.192 19:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1606156 00:22:43.192 19:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:43.192 19:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:43.192 19:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1606156' 00:22:43.192 killing process with pid 1606156 00:22:43.192 19:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 1606156 00:22:43.192 19:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 1606156 00:22:43.451 19:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:43.451 19:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:43.451 19:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:43.451 19:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:43.451 19:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:43.451 19:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.451 19:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:43.451 19:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.988 19:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:45.988 00:22:45.988 real 0m10.738s 00:22:45.988 user 0m8.264s 00:22:45.988 sys 0m5.694s 00:22:45.988 19:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:45.988 19:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:45.988 ************************************ 00:22:45.988 END TEST nvmf_identify 00:22:45.988 ************************************ 00:22:45.988 19:23:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:45.988 19:23:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:45.988 19:23:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:45.988 19:23:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.988 ************************************ 00:22:45.988 START TEST nvmf_perf 00:22:45.988 ************************************ 00:22:45.988 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:45.988 * Looking for test storage... 
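For orientation: the suite transition above (the END TEST nvmf_identify banner, the real/user/sys triple, then the START TEST nvmf_perf banner) is printed by the harness's run_test wrapper, not by the test scripts themselves. A minimal sketch of such a wrapper, inferred from this output alone (the real run_test in common/autotest_common.sh differs in detail):

    #!/usr/bin/env bash
    # run_test-style wrapper: a sketch modeled on the banners and timing
    # lines in this log, not the actual SPDK helper.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # bash keyword; emits the real/user/sys triple
        local rc=$?               # preserve the test script's exit status
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    # Usage, matching the invocation recorded above:
    run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp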
00:22:45.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:45.988 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:45.988 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:45.988 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
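The perf.sh preamble above pins the Malloc backing-device geometry (64 MiB, 512-byte blocks) and the rpc.py path before nvmftestinit arms the cleanup trap. Reduced to standalone commands, the same setup looks roughly like this sketch (values copied from the xtrace above; Malloc0 is the bdev name the target hands back, as seen later in this log):

    # Hand-run equivalent of the perf.sh preamble (a sketch).
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    MALLOC_BDEV_SIZE=64                     # MiB
    MALLOC_BLOCK_SIZE=512                   # bytes
    trap nvmftestfini SIGINT SIGTERM EXIT   # tear the target down on any exit
    # Later in the run this becomes the bdev backing namespace 1:
    $rpc_py bdev_malloc_create $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE   # prints: Malloc0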
00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:22:45.989 19:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:52.585 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:52.585 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:22:52.585 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:52.585 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:52.585 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:52.585 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:52.585 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:52.585 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:22:52.585 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:52.585 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:22:52.585 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:22:52.585 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:22:52.585 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:22:52.585 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:22:52.585 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:22:52.585 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:52.585 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:52.585 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:52.585 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:52.585 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:52.585 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:52.585 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:52.585 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:52.585 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:52.585 
19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:52.586 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:52.586 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
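The two "Found 0000:af:00.x" matches and the "Found net devices" line being echoed here come from matching the e810 device ID (0x8086:0x159b, ice driver) against the PCI bus and then resolving each function to its kernel netdev through sysfs. A condensed sketch of that lookup, simplified from what test/nvmf/common.sh does above (pci_devs holds the matched functions):

    # Sysfs netdev lookup behind the 'Found net devices under ...' lines.
    for pci in "${pci_devs[@]}"; do            # e.g. 0000:af:00.0, 0000:af:00.1
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # netdev dir(s) for this port
        ((${#pci_net_devs[@]} == 0)) && continue          # port not bound to a net driver
        pci_net_devs=("${pci_net_devs[@]##*/}")           # basename -> cvl_0_0 / cvl_0_1
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done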
00:22:52.586 Found net devices under 0000:af:00.0: cvl_0_0 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:52.586 Found net devices under 0000:af:00.1: cvl_0_1 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:52.586 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:52.586 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:22:52.586 00:22:52.586 --- 10.0.0.2 ping statistics --- 00:22:52.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.586 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:52.586 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:52.586 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:22:52.586 00:22:52.586 --- 10.0.0.1 ping statistics --- 00:22:52.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.586 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1609940 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1609940 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 1609940 ']' 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:52.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
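All of the plumbing above reduces to a short recipe: one e810 port stays in the default namespace as the initiator (cvl_0_1, 10.0.0.1) while its sibling is pushed into a private namespace as the target side (cvl_0_0, 10.0.0.2), with an iptables accept for the NVMe/TCP port and a ping in each direction as a sanity check. Collected in one place (commands as recorded in the xtrace; interface names are specific to this host):

    # Back-to-back NVMe/TCP topology, as set up by nvmf_tcp_init above.
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start clean
    ip netns add cvl_0_0_ns_spdk                           # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # move target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator

The target application itself then runs inside that namespace, which is why the nvmf_tgt launch just below is wrapped in ip netns exec cvl_0_0_ns_spdk.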
00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:52.586 19:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:52.586 [2024-07-24 19:23:38.489652] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:22:52.586 [2024-07-24 19:23:38.489699] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:52.586 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.586 [2024-07-24 19:23:38.563676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:52.586 [2024-07-24 19:23:38.638840] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:52.586 [2024-07-24 19:23:38.638876] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:52.586 [2024-07-24 19:23:38.638885] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:52.587 [2024-07-24 19:23:38.638894] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:52.587 [2024-07-24 19:23:38.638904] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:52.587 [2024-07-24 19:23:38.638954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.587 [2024-07-24 19:23:38.639160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:52.587 [2024-07-24 19:23:38.639227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:52.587 [2024-07-24 19:23:38.639229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.154 19:23:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:53.154 19:23:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:22:53.154 19:23:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:53.154 19:23:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:53.154 19:23:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:53.154 19:23:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:53.154 19:23:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:53.154 19:23:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:56.440 19:23:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:56.440 19:23:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:56.440 19:23:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:22:56.440 19:23:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:56.698 19:23:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:56.698 19:23:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 
-- # '[' -n 0000:d8:00.0 ']' 00:22:56.698 19:23:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:56.698 19:23:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:56.698 19:23:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:56.698 [2024-07-24 19:23:42.909184] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:56.956 19:23:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:56.956 19:23:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:56.956 19:23:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:57.215 19:23:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:57.215 19:23:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:57.473 19:23:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:57.473 [2024-07-24 19:23:43.659921] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:57.473 19:23:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:57.731 19:23:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:22:57.731 19:23:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:22:57.731 19:23:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:57.731 19:23:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:22:59.107 Initializing NVMe Controllers 00:22:59.107 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:22:59.107 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:22:59.107 Initialization complete. Launching workers. 
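Before the local-PCIe baseline numbers land below, it is worth collecting the target provisioning that just scrolled past: one TCP transport, one subsystem with two namespaces (the Malloc0 ramdisk and Nvme0n1, the passthrough of the local 0000:d8:00.0 drive), plus data and discovery listeners. As bare RPCs, with $rpc_py as defined earlier (rpc.py reaches the in-namespace target over its default unix socket, /var/tmp/spdk.sock):

    # Target provisioning RPCs, as issued by perf.sh above.
    $rpc_py nvmf_create_transport -t tcp -o       # transport options as recorded above
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # -> NSID 1
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1    # -> NSID 2
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420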
00:22:59.107 ======================================================== 00:22:59.107 Latency(us) 00:22:59.107 Device Information : IOPS MiB/s Average min max 00:22:59.107 PCIE (0000:d8:00.0) NSID 1 from core 0: 101634.44 397.01 314.35 30.40 5177.33 00:22:59.107 ======================================================== 00:22:59.107 Total : 101634.44 397.01 314.35 30.40 5177.33 00:22:59.107 00:22:59.107 19:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:59.107 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.482 Initializing NVMe Controllers 00:23:00.482 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:00.482 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:00.482 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:00.482 Initialization complete. Launching workers. 00:23:00.482 ======================================================== 00:23:00.482 Latency(us) 00:23:00.482 Device Information : IOPS MiB/s Average min max 00:23:00.482 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 68.00 0.27 15261.07 255.22 45051.93 00:23:00.482 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 72.00 0.28 14062.47 6994.67 47886.22 00:23:00.482 ======================================================== 00:23:00.482 Total : 140.00 0.55 14644.65 255.22 47886.22 00:23:00.482 00:23:00.482 19:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:00.482 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.858 Initializing NVMe Controllers 00:23:01.858 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:01.858 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:01.858 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:01.858 Initialization complete. Launching workers. 
00:23:01.858 ======================================================== 00:23:01.858 Latency(us) 00:23:01.858 Device Information : IOPS MiB/s Average min max 00:23:01.858 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10521.89 41.10 3043.04 535.10 6593.15 00:23:01.858 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3883.96 15.17 8283.95 5411.37 16084.13 00:23:01.858 ======================================================== 00:23:01.858 Total : 14405.86 56.27 4456.05 535.10 16084.13 00:23:01.858 00:23:01.858 19:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:01.858 19:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:01.858 19:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:01.858 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.390 Initializing NVMe Controllers 00:23:04.390 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:04.390 Controller IO queue size 128, less than required. 00:23:04.390 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:04.390 Controller IO queue size 128, less than required. 00:23:04.390 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:04.390 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:04.390 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:04.390 Initialization complete. Launching workers. 00:23:04.390 ======================================================== 00:23:04.390 Latency(us) 00:23:04.390 Device Information : IOPS MiB/s Average min max 00:23:04.390 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1001.08 250.27 131610.89 61256.86 218975.77 00:23:04.390 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 583.97 145.99 227129.79 54791.63 350590.43 00:23:04.390 ======================================================== 00:23:04.390 Total : 1585.05 396.26 166802.06 54791.63 350590.43 00:23:04.390 00:23:04.390 19:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:04.390 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.390 No valid NVMe controllers or AIO or URING devices found 00:23:04.390 Initializing NVMe Controllers 00:23:04.390 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:04.390 Controller IO queue size 128, less than required. 00:23:04.390 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:04.390 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:04.390 Controller IO queue size 128, less than required. 00:23:04.390 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:04.390 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:23:04.390 WARNING: Some requested NVMe devices were skipped 00:23:04.390 19:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:04.390 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.677 Initializing NVMe Controllers 00:23:07.677 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:07.677 Controller IO queue size 128, less than required. 00:23:07.677 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:07.677 Controller IO queue size 128, less than required. 00:23:07.677 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:07.677 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:07.677 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:07.677 Initialization complete. Launching workers. 00:23:07.677 00:23:07.677 ==================== 00:23:07.677 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:07.677 TCP transport: 00:23:07.677 polls: 45878 00:23:07.677 idle_polls: 12720 00:23:07.677 sock_completions: 33158 00:23:07.677 nvme_completions: 4153 00:23:07.677 submitted_requests: 6150 00:23:07.677 queued_requests: 1 00:23:07.677 00:23:07.677 ==================== 00:23:07.677 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:07.677 TCP transport: 00:23:07.677 polls: 46980 00:23:07.677 idle_polls: 15177 00:23:07.677 sock_completions: 31803 00:23:07.677 nvme_completions: 4163 00:23:07.677 submitted_requests: 6222 00:23:07.677 queued_requests: 1 00:23:07.677 ======================================================== 00:23:07.677 Latency(us) 00:23:07.677 Device Information : IOPS MiB/s Average min max 00:23:07.677 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1038.00 259.50 127601.75 65896.89 181685.75 00:23:07.677 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1040.50 260.12 125119.82 52635.70 178807.98 00:23:07.677 ======================================================== 00:23:07.677 Total : 2078.50 519.62 126359.29 52635.70 181685.75 00:23:07.677 00:23:07.677 19:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:07.677 19:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:07.677 19:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:07.677 19:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:07.677 19:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:07.677 19:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:07.677 19:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:23:07.677 19:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:07.677 19:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:23:07.677 19:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:07.677 19:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:07.677 rmmod nvme_tcp 00:23:07.677 rmmod nvme_fabrics 00:23:07.677 rmmod nvme_keyring 00:23:07.677 19:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:07.677 19:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:23:07.677 19:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:23:07.677 19:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1609940 ']' 00:23:07.677 19:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1609940 00:23:07.677 19:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 1609940 ']' 00:23:07.677 19:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 1609940 00:23:07.677 19:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:23:07.677 19:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:07.677 19:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1609940 00:23:07.677 19:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:07.677 19:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:07.677 19:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1609940' 00:23:07.677 killing process with pid 1609940 00:23:07.677 19:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 1609940 00:23:07.677 19:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 1609940 00:23:09.580 19:23:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:09.580 19:23:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:09.580 19:23:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:09.580 19:23:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:09.580 19:23:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:09.580 19:23:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.580 19:23:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:09.580 19:23:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.485 19:23:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:11.485 00:23:11.485 real 0m25.939s 00:23:11.485 user 1m7.674s 00:23:11.485 sys 0m8.500s 00:23:11.485 19:23:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:11.485 19:23:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:11.485 ************************************ 00:23:11.485 END TEST nvmf_perf 00:23:11.485 ************************************ 00:23:11.485 19:23:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:11.485 19:23:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:11.485 19:23:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:11.485 19:23:57 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:11.744 ************************************ 00:23:11.744 START TEST nvmf_fio_host 00:23:11.744 ************************************ 00:23:11.744 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:11.744 * Looking for test storage... 00:23:11.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=[... duplicate toolchain PATH exports, identical to the paths/export.sh@2-@6 block traced above ...] 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 --
# NVMF_APP+=("${NO_HUGE[@]}") 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:11.745 19:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:18.349 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:18.349 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:18.349 19:24:04 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:18.349 Found net devices under 0000:af:00.0: cvl_0_0 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:18.349 Found net devices under 0000:af:00.1: cvl_0_1 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:18.349 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:18.350 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:18.350 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:18.350 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:18.350 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:18.350 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:18.350 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:18.350 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:18.350 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:18.350 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:18.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:18.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:23:18.350 00:23:18.350 --- 10.0.0.2 ping statistics --- 00:23:18.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.350 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:23:18.350 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:18.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:18.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:23:18.350 00:23:18.350 --- 10.0.0.1 ping statistics --- 00:23:18.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.350 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:23:18.350 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:18.350 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:23:18.350 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:18.350 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:18.350 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:18.350 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:18.350 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:18.350 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:18.350 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:18.350 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:18.350 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:18.350 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:18.350 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.350 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1616743 00:23:18.350 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:18.350 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:18.350 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1616743 00:23:18.350 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 1616743 ']' 00:23:18.350 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:18.350 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:18.350 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:18.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:18.350 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:18.350 19:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.350 [2024-07-24 19:24:04.457595] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:23:18.350 [2024-07-24 19:24:04.457645] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:18.350 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.350 [2024-07-24 19:24:04.531302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:18.609 [2024-07-24 19:24:04.604308] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:18.609 [2024-07-24 19:24:04.604348] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:18.609 [2024-07-24 19:24:04.604362] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:18.609 [2024-07-24 19:24:04.604372] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:18.609 [2024-07-24 19:24:04.604379] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:18.609 [2024-07-24 19:24:04.604428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:18.609 [2024-07-24 19:24:04.604525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:18.609 [2024-07-24 19:24:04.604610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:18.609 [2024-07-24 19:24:04.604612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.177 19:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:19.177 19:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:23:19.177 19:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:19.436 [2024-07-24 19:24:05.427464] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:19.436 19:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:19.436 19:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:19.436 19:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.436 19:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:19.695 Malloc1 00:23:19.695 19:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:19.695 19:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:19.954 19:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:20.213 [2024-07-24 19:24:06.252689] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:20.213 19:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:20.477 
19:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:20.477 19:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:20.477 19:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:20.477 19:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:20.477 19:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:20.477 19:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:20.477 19:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:20.477 19:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:20.477 19:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:20.477 19:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:20.477 19:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:20.477 19:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:20.477 19:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:20.477 19:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:20.477 19:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:20.477 19:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:20.477 19:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:20.477 19:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:20.477 19:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:20.477 19:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:20.477 19:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:20.477 19:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:20.477 19:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:20.748 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:20.748 fio-3.35 00:23:20.748 Starting 
1 thread 00:23:20.748 EAL: No free 2048 kB hugepages reported on node 1 00:23:23.315 00:23:23.315 test: (groupid=0, jobs=1): err= 0: pid=1617534: Wed Jul 24 19:24:09 2024 00:23:23.315 read: IOPS=12.4k, BW=48.6MiB/s (50.9MB/s)(97.4MiB/2005msec) 00:23:23.315 slat (nsec): min=1535, max=248262, avg=1668.09, stdev=2233.61 00:23:23.315 clat (usec): min=3396, max=9532, avg=5682.85, stdev=410.65 00:23:23.315 lat (usec): min=3430, max=9534, avg=5684.52, stdev=410.67 00:23:23.315 clat percentiles (usec): 00:23:23.315 | 1.00th=[ 4686], 5.00th=[ 5014], 10.00th=[ 5211], 20.00th=[ 5342], 00:23:23.316 | 30.00th=[ 5473], 40.00th=[ 5604], 50.00th=[ 5669], 60.00th=[ 5800], 00:23:23.316 | 70.00th=[ 5866], 80.00th=[ 5997], 90.00th=[ 6194], 95.00th=[ 6325], 00:23:23.316 | 99.00th=[ 6652], 99.50th=[ 6718], 99.90th=[ 8160], 99.95th=[ 8979], 00:23:23.316 | 99.99th=[ 9503] 00:23:23.316 bw ( KiB/s): min=48512, max=50480, per=99.97%, avg=49724.00, stdev=848.52, samples=4 00:23:23.316 iops : min=12128, max=12620, avg=12431.00, stdev=212.13, samples=4 00:23:23.316 write: IOPS=12.4k, BW=48.5MiB/s (50.9MB/s)(97.3MiB/2005msec); 0 zone resets 00:23:23.316 slat (nsec): min=1585, max=252437, avg=1748.30, stdev=1746.91 00:23:23.316 clat (usec): min=2496, max=8312, avg=4553.04, stdev=346.85 00:23:23.316 lat (usec): min=2511, max=8313, avg=4554.79, stdev=346.86 00:23:23.316 clat percentiles (usec): 00:23:23.316 | 1.00th=[ 3720], 5.00th=[ 4015], 10.00th=[ 4146], 20.00th=[ 4293], 00:23:23.316 | 30.00th=[ 4359], 40.00th=[ 4490], 50.00th=[ 4555], 60.00th=[ 4621], 00:23:23.316 | 70.00th=[ 4752], 80.00th=[ 4817], 90.00th=[ 4948], 95.00th=[ 5080], 00:23:23.316 | 99.00th=[ 5276], 99.50th=[ 5407], 99.90th=[ 6521], 99.95th=[ 7570], 00:23:23.316 | 99.99th=[ 8291] 00:23:23.316 bw ( KiB/s): min=49176, max=50176, per=100.00%, avg=49690.00, stdev=409.04, samples=4 00:23:23.316 iops : min=12294, max=12544, avg=12422.50, stdev=102.26, samples=4 00:23:23.316 lat (msec) : 4=2.40%, 10=97.60% 00:23:23.316 cpu : usr=62.92%, sys=31.39%, ctx=66, majf=0, minf=5 00:23:23.316 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:23.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:23.316 issued rwts: total=24932,24902,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.316 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:23.316 00:23:23.316 Run status group 0 (all jobs): 00:23:23.316 READ: bw=48.6MiB/s (50.9MB/s), 48.6MiB/s-48.6MiB/s (50.9MB/s-50.9MB/s), io=97.4MiB (102MB), run=2005-2005msec 00:23:23.316 WRITE: bw=48.5MiB/s (50.9MB/s), 48.5MiB/s-48.5MiB/s (50.9MB/s-50.9MB/s), io=97.3MiB (102MB), run=2005-2005msec 00:23:23.316 19:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:23.316 19:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:23.316 19:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:23.316 19:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:23:23.316 19:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:23.316 19:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:23.316 19:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:23.316 19:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:23.316 19:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:23.316 19:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:23.316 19:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:23.316 19:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:23.316 19:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:23.316 19:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:23.316 19:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:23.316 19:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:23.316 19:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:23.316 19:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:23.316 19:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:23.316 19:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:23.316 19:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:23.316 19:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:23.582 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:23.582 fio-3.35 00:23:23.582 Starting 1 thread 00:23:23.582 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.120 00:23:26.120 test: (groupid=0, jobs=1): err= 0: pid=1618191: Wed Jul 24 19:24:12 2024 00:23:26.120 read: IOPS=10.5k, BW=164MiB/s (172MB/s)(330MiB/2008msec) 00:23:26.120 slat (nsec): min=2340, max=82478, avg=2706.78, stdev=1170.55 00:23:26.120 clat (usec): min=2672, max=51168, avg=7223.95, stdev=2863.81 00:23:26.120 lat (usec): min=2675, max=51170, avg=7226.66, stdev=2863.91 00:23:26.120 clat percentiles (usec): 00:23:26.120 | 1.00th=[ 3621], 5.00th=[ 4293], 10.00th=[ 4817], 20.00th=[ 5538], 00:23:26.120 | 30.00th=[ 5997], 40.00th=[ 6456], 50.00th=[ 6915], 60.00th=[ 7439], 00:23:26.120 | 70.00th=[ 8029], 80.00th=[ 8717], 90.00th=[ 9634], 95.00th=[10421], 00:23:26.120 | 99.00th=[12649], 99.50th=[13435], 99.90th=[47973], 99.95th=[48497], 00:23:26.120 | 99.99th=[49021] 00:23:26.120 bw ( KiB/s): min=82272, max=93184, per=51.46%, avg=86536.00, stdev=5257.77, samples=4 
00:23:26.120 iops : min= 5142, max= 5824, avg=5408.50, stdev=328.61, samples=4 00:23:26.120 write: IOPS=6413, BW=100MiB/s (105MB/s)(177MiB/1763msec); 0 zone resets 00:23:26.120 slat (usec): min=28, max=263, avg=29.95, stdev= 5.20 00:23:26.120 clat (usec): min=4977, max=53427, avg=8409.08, stdev=3479.53 00:23:26.120 lat (usec): min=5006, max=53457, avg=8439.03, stdev=3479.97 00:23:26.120 clat percentiles (usec): 00:23:26.120 | 1.00th=[ 5735], 5.00th=[ 6259], 10.00th=[ 6587], 20.00th=[ 7046], 00:23:26.120 | 30.00th=[ 7373], 40.00th=[ 7635], 50.00th=[ 7963], 60.00th=[ 8291], 00:23:26.120 | 70.00th=[ 8717], 80.00th=[ 9241], 90.00th=[10159], 95.00th=[10814], 00:23:26.120 | 99.00th=[13173], 99.50th=[46400], 99.90th=[52691], 99.95th=[53216], 00:23:26.120 | 99.99th=[53216] 00:23:26.120 bw ( KiB/s): min=85824, max=96160, per=87.68%, avg=89976.00, stdev=4766.63, samples=4 00:23:26.120 iops : min= 5364, max= 6010, avg=5623.50, stdev=297.91, samples=4 00:23:26.120 lat (msec) : 4=1.74%, 10=89.29%, 20=8.57%, 50=0.28%, 100=0.11% 00:23:26.120 cpu : usr=83.11%, sys=15.05%, ctx=71, majf=0, minf=2 00:23:26.120 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:23:26.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:26.120 issued rwts: total=21103,11307,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.120 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:26.120 00:23:26.120 Run status group 0 (all jobs): 00:23:26.120 READ: bw=164MiB/s (172MB/s), 164MiB/s-164MiB/s (172MB/s-172MB/s), io=330MiB (346MB), run=2008-2008msec 00:23:26.120 WRITE: bw=100MiB/s (105MB/s), 100MiB/s-100MiB/s (105MB/s-105MB/s), io=177MiB (185MB), run=1763-1763msec 00:23:26.120 19:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:26.120 19:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:26.120 19:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:26.120 19:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:26.120 19:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:26.120 19:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:26.120 19:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:23:26.120 19:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:26.120 19:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:23:26.120 19:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:26.120 19:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:26.120 rmmod nvme_tcp 00:23:26.120 rmmod nvme_fabrics 00:23:26.120 rmmod nvme_keyring 00:23:26.120 19:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:26.120 19:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:23:26.120 19:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:23:26.120 19:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1616743 ']' 00:23:26.120 19:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1616743 
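For reference, the bring-up that the nvmf_fio_host test traced above reduces to a short sequence: start nvmf_tgt inside the target namespace, configure a TCP transport, a malloc bdev, and a subsystem over JSON-RPC, then point fio at the listener through the SPDK external ioengine. A minimal sketch of that sequence, condensed from the host/fio.sh trace above (the relative paths and the `&` backgrounding are shorthand for the absolute workspace paths and the waitforlisten handshake in the log):

# Target side (same commands as logged at host/fio.sh@23 through @36 above):
sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: fio speaks NVMe/TCP directly via the preloaded SPDK plugin; the
# "filename" encodes the connection parameters (trtype/traddr/trsvcid/ns) instead
# of naming a block device (host/fio.sh@41 above).
LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio ./app/fio/nvme/example_config.fio \
  '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096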
00:23:26.120 19:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 1616743 ']' 00:23:26.120 19:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 1616743 00:23:26.120 19:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:23:26.120 19:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:26.120 19:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1616743 00:23:26.380 19:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:26.380 19:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:26.380 19:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1616743' 00:23:26.380 killing process with pid 1616743 00:23:26.380 19:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 1616743 00:23:26.380 19:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 1616743 00:23:26.380 19:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:26.380 19:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:26.380 19:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:26.380 19:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:26.380 19:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:26.381 19:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.381 19:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:26.381 19:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.916 19:24:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:28.916 00:23:28.916 real 0m16.907s 00:23:28.916 user 0m54.297s 00:23:28.916 sys 0m7.514s 00:23:28.916 19:24:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:28.916 19:24:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.916 ************************************ 00:23:28.916 END TEST nvmf_fio_host 00:23:28.916 ************************************ 00:23:28.916 19:24:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:28.916 19:24:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:28.916 19:24:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.917 ************************************ 00:23:28.917 START TEST nvmf_failover 00:23:28.917 ************************************ 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:28.917 * Looking for test storage... 
00:23:28.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=[... duplicate toolchain PATH exports, identical to the paths/export.sh@2-@6 block traced above ...] 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit
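nvmftestinit, expanded in the trace that follows, rebuilds the same two-port rig the earlier tests used: one port of the e810 NIC (cvl_0_0) is moved into a private network namespace to act as the target, while its sibling (cvl_0_1) stays in the root namespace as the initiator, so NVMe/TCP traffic crosses real hardware on a single host. Condensed to its effect, the plumbing is the sketch below (commands taken verbatim from the nvmf/common.sh@248-@268 trace of this run; assumes root privileges and the interface names discovered here):

ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP listener port
ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check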
00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:23:28.917 19:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:35.494 19:24:21 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:35.494 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:35.494 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:35.494 Found net devices under 0000:af:00.0: cvl_0_0 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:35.494 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:35.495 Found net devices under 0000:af:00.1: cvl_0_1 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:35.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:35.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:23:35.495 00:23:35.495 --- 10.0.0.2 ping statistics --- 00:23:35.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.495 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:35.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:35.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:23:35.495 00:23:35.495 --- 10.0.0.1 ping statistics --- 00:23:35.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.495 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1622206 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:35.495 19:24:21 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1622206 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1622206 ']' 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:35.495 19:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:35.755 [2024-07-24 19:24:21.739724] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:23:35.755 [2024-07-24 19:24:21.739777] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.755 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.755 [2024-07-24 19:24:21.812809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:35.755 [2024-07-24 19:24:21.885810] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.755 [2024-07-24 19:24:21.885849] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.755 [2024-07-24 19:24:21.885858] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.755 [2024-07-24 19:24:21.885867] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.755 [2024-07-24 19:24:21.885874] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
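The trace above builds a two-port loopback out of the one E810 NIC pair: port cvl_0_0 is moved into a private network namespace to act as the target side at 10.0.0.2, while its peer cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and nvmf_tgt is then launched inside that namespace. A condensed sketch of the same steps, with the interface names from this run (the real logic lives in nvmf/common.sh, and the readiness loop here is illustrative; the harness uses its own waitforlisten helper):

# Namespace topology used by this run (sketch; names taken from the log above)
ip netns add cvl_0_0_ns_spdk                              # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # move one E810 port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
# Start the target inside the namespace, then block until its RPC socket answers
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done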
00:23:35.755 [2024-07-24 19:24:21.885972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:35.755 [2024-07-24 19:24:21.886059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:35.755 [2024-07-24 19:24:21.886061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:36.323 19:24:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:36.323 19:24:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:23:36.323 19:24:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:36.323 19:24:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:36.323 19:24:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:36.582 19:24:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:36.582 19:24:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:36.582 [2024-07-24 19:24:22.738650] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:36.582 19:24:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:36.841 Malloc0 00:23:36.841 19:24:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:37.100 19:24:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:37.100 19:24:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:37.362 [2024-07-24 19:24:23.488263] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:37.362 19:24:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:37.624 [2024-07-24 19:24:23.676791] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:37.624 19:24:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:37.624 [2024-07-24 19:24:23.849352] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:37.950 19:24:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1622680 00:23:37.950 19:24:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:37.950 19:24:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; 
nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:37.950 19:24:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1622680 /var/tmp/bdevperf.sock 00:23:37.950 19:24:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1622680 ']' 00:23:37.950 19:24:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:37.950 19:24:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:37.950 19:24:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:37.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:37.950 19:24:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:37.950 19:24:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:38.517 19:24:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:38.517 19:24:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:23:38.517 19:24:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:38.775 NVMe0n1 00:23:39.034 19:24:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:39.034 00:23:39.292 19:24:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1622947 00:23:39.292 19:24:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:39.292 19:24:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:23:40.230 19:24:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:40.230 [2024-07-24 19:24:26.450810] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95a1e0 is same with the state(5) to be set 00:23:40.230 [2024-07-24 19:24:26.450861] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95a1e0 is same with the state(5) to be set 00:23:40.230 [2024-07-24 19:24:26.450876] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95a1e0 is same with the state(5) to be set 00:23:40.230 [2024-07-24 19:24:26.450885] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95a1e0 is same with the state(5) to be set 00:23:40.230 [2024-07-24 19:24:26.450893] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95a1e0 is same with the state(5) to be set 00:23:40.230 [2024-07-24 19:24:26.450902] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95a1e0 is same with the state(5) to be set 00:23:40.230 [2024-07-24 19:24:26.450910] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x95a1e0 is same with the state(5) to be set
[... identical tcp.c:1653:nvmf_tcp_qpair_set_recv_state *ERROR* lines for tqpair=0x95a1e0 (2024-07-24 19:24:26.450918 through 19:24:26.451928, differing only in timestamp) omitted ...]
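Those repeated recv-state errors are the target tearing down the queue pairs that were connected through the just-removed 4420 listener; bdevperf had attached the same bdev name NVMe0 through two portals, so the second attach adds an alternate path and I/O is expected to continue on 4421. The RPC sequence the test drives, condensed from the log (rpc.py paths shortened for readability; see host/failover.sh for the actual script):

# Condensed failover setup and first trigger from this run (sketch)
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do      # three listeners so paths can be yanked one at a time
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done
# bdevperf attaches the same controller name through two portals (multipath) ...
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# ... then the test removes the active listener to force a failover
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420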
00:23:40.491 19:24:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:23:43.780 19:24:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:43.780
00:23:43.780 19:24:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:23:44.040 19:24:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:23:47.326 19:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:47.326 [2024-07-24 19:24:33.237479] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:47.326 19:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:23:48.262 19:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:23:48.262 [2024-07-24 19:24:34.434643] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95bd50 is same with the state(5) to be set
[... identical recv-state *ERROR* lines for tqpair=0x95bd50 (2024-07-24 19:24:34.434688 through 19:24:34.434963, differing only in timestamp) omitted ...]
00:23:48.263 19:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1622947
00:23:54.841 0
00:23:54.841 19:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1622680
00:23:54.841 19:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1622680 ']'
00:23:54.841 19:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1622680
00:23:54.841 19:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:23:54.841 19:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:54.841 19:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1622680
00:23:54.841 19:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:23:54.841 19:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:23:54.841 19:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1622680'
00:23:54.841 killing process with pid 1622680
00:23:54.841 19:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1622680
00:23:54.841 19:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1622680
00:23:54.841 19:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:54.841 [2024-07-24 19:24:23.922577] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization...
00:23:54.841 [2024-07-24 19:24:23.922631] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1622680 ]
00:23:54.841 EAL: No free 2048 kB hugepages reported on node 1
00:23:54.841 [2024-07-24 19:24:23.993344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:54.841 [2024-07-24 19:24:24.064927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:23:54.841 Running I/O for 15 seconds...
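The try.txt dump that follows prints every I/O that was still outstanding on a deleted queue pair as a command/completion pair with status ABORTED - SQ DELETION; the (00/08) in each completion is NVMe status code type 0x0 (generic) with status code 0x08 (command aborted due to SQ deletion), and the bdev_nvme layer reissues these on the surviving path, which is why the run still completes. A quick way to tally them from the captured file (a sanity check, not part of the harness; path as cat'd above):

# Count the I/Os aborted by SQ deletion during the failovers
grep -c 'ABORTED - SQ DELETION' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt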
00:23:54.841 [2024-07-24 19:24:26.452874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:54.841 [2024-07-24 19:24:26.452911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:54.843 [2024-07-24 19:24:26.453804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:54.843 [2024-07-24 19:24:26.453814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching READ/WRITE command and ABORTED - SQ DELETION completion pairs for the remaining outstanding I/O (lba 99744 through 100208) omitted ...]
00:23:54.843 [2024-07-24 19:24:26.454119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:54.843 [2024-07-24 19:24:26.454128] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.843 [2024-07-24 19:24:26.454140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.843 [2024-07-24 19:24:26.454151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.843 [2024-07-24 19:24:26.454161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.843 [2024-07-24 19:24:26.454170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.843 [2024-07-24 19:24:26.454181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.843 [2024-07-24 19:24:26.454190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.844 [2024-07-24 19:24:26.454201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.844 [2024-07-24 19:24:26.454210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.844 [2024-07-24 19:24:26.454220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.844 [2024-07-24 19:24:26.454229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.844 [2024-07-24 19:24:26.454240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.844 [2024-07-24 19:24:26.454249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.844 [2024-07-24 19:24:26.454259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.844 [2024-07-24 19:24:26.454268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.844 [2024-07-24 19:24:26.454278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.844 [2024-07-24 19:24:26.454288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.844 [2024-07-24 19:24:26.454299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.844 [2024-07-24 19:24:26.454308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.844 [2024-07-24 19:24:26.454319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.844 [2024-07-24 19:24:26.454328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.844 [2024-07-24 19:24:26.454338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.844 [2024-07-24 19:24:26.454347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.844 [2024-07-24 19:24:26.454358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.844 [2024-07-24 19:24:26.454367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.844 [2024-07-24 19:24:26.454377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.844 [2024-07-24 19:24:26.454386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.844 [2024-07-24 19:24:26.454399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.844 [2024-07-24 19:24:26.454408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.844 [2024-07-24 19:24:26.454418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.844 [2024-07-24 19:24:26.454427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.844 [2024-07-24 19:24:26.454438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:100344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.844 [2024-07-24 19:24:26.454447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.844 [2024-07-24 19:24:26.454458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.844 [2024-07-24 19:24:26.454467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.844 [2024-07-24 19:24:26.454478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:100360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.844 [2024-07-24 19:24:26.454486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.844 [2024-07-24 19:24:26.454497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:100368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.844 [2024-07-24 19:24:26.454506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.844 [2024-07-24 19:24:26.454516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:100376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.844 [2024-07-24 19:24:26.454526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.844 [2024-07-24 19:24:26.454536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:100384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.844 [2024-07-24 19:24:26.454545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.844 [2024-07-24 19:24:26.454559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:100392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.844 [2024-07-24 19:24:26.454568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.844 [2024-07-24 19:24:26.454579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.844 [2024-07-24 19:24:26.454588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.844 [2024-07-24 19:24:26.454598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:100408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.844 [2024-07-24 19:24:26.454607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.844 [2024-07-24 19:24:26.454617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.844 [2024-07-24 19:24:26.454626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.844 [2024-07-24 19:24:26.454637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:100424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.844 [2024-07-24 19:24:26.454647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.844 [2024-07-24 19:24:26.454657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.844 [2024-07-24 19:24:26.454666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.844 [2024-07-24 19:24:26.454677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.844 [2024-07-24 19:24:26.454686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.844 [2024-07-24 19:24:26.454696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:100448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.844 [2024-07-24 19:24:26.454705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.844 [2024-07-24 19:24:26.454718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.844 [2024-07-24 19:24:26.454728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:54.844 [2024-07-24 19:24:26.454739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:100464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.844 [2024-07-24 19:24:26.454748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.844 [2024-07-24 19:24:26.454758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:100472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.844 [2024-07-24 19:24:26.454767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.844 [2024-07-24 19:24:26.454777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.844 [2024-07-24 19:24:26.454786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.845 [2024-07-24 19:24:26.454796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:100488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.845 [2024-07-24 19:24:26.454805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.845 [2024-07-24 19:24:26.454816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.845 [2024-07-24 19:24:26.454824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.845 [2024-07-24 19:24:26.454836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.845 [2024-07-24 19:24:26.454845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.845 [2024-07-24 19:24:26.454856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:100512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.845 [2024-07-24 19:24:26.454865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.845 [2024-07-24 19:24:26.454877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:100520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.845 [2024-07-24 19:24:26.454886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.845 [2024-07-24 19:24:26.454898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.845 [2024-07-24 19:24:26.454907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.845 [2024-07-24 19:24:26.454931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.845 [2024-07-24 19:24:26.454940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100536 len:8 PRP1 0x0 PRP2 0x0 00:23:54.845 [2024-07-24 19:24:26.454949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.845 [2024-07-24 19:24:26.454961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.845 [2024-07-24 19:24:26.454968] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.845 [2024-07-24 19:24:26.454976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100544 len:8 PRP1 0x0 PRP2 0x0 00:23:54.845 [2024-07-24 19:24:26.454985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.845 [2024-07-24 19:24:26.454994] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.845 [2024-07-24 19:24:26.455001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.845 [2024-07-24 19:24:26.455009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100552 len:8 PRP1 0x0 PRP2 0x0 00:23:54.845 [2024-07-24 19:24:26.455018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.845 [2024-07-24 19:24:26.455027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.845 [2024-07-24 19:24:26.455034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.845 [2024-07-24 19:24:26.455041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100560 len:8 PRP1 0x0 PRP2 0x0 00:23:54.845 [2024-07-24 19:24:26.455051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.845 [2024-07-24 19:24:26.455060] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.845 [2024-07-24 19:24:26.455067] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.845 [2024-07-24 19:24:26.455074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100568 len:8 PRP1 0x0 PRP2 0x0 00:23:54.845 [2024-07-24 19:24:26.455083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.845 [2024-07-24 19:24:26.455092] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.845 [2024-07-24 19:24:26.455099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.845 [2024-07-24 19:24:26.455107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100576 len:8 PRP1 0x0 PRP2 0x0 00:23:54.845 [2024-07-24 19:24:26.455116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.845 [2024-07-24 19:24:26.455125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.845 [2024-07-24 19:24:26.455132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.845 [2024-07-24 19:24:26.455139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100584 len:8 PRP1 0x0 PRP2 0x0 00:23:54.845 [2024-07-24 19:24:26.455148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:54.845 [2024-07-24 19:24:26.455158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.845 [2024-07-24 19:24:26.455167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.845 [2024-07-24 19:24:26.455174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100592 len:8 PRP1 0x0 PRP2 0x0 00:23:54.845 [2024-07-24 19:24:26.455184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.845 [2024-07-24 19:24:26.455194] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.845 [2024-07-24 19:24:26.455201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.845 [2024-07-24 19:24:26.455208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100600 len:8 PRP1 0x0 PRP2 0x0 00:23:54.845 [2024-07-24 19:24:26.455217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.845 [2024-07-24 19:24:26.455227] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.845 [2024-07-24 19:24:26.455234] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.845 [2024-07-24 19:24:26.455241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100608 len:8 PRP1 0x0 PRP2 0x0 00:23:54.845 [2024-07-24 19:24:26.455250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.845 [2024-07-24 19:24:26.455259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.845 [2024-07-24 19:24:26.455266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.845 [2024-07-24 19:24:26.455274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100616 len:8 PRP1 0x0 PRP2 0x0 00:23:54.845 [2024-07-24 19:24:26.455283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.845 [2024-07-24 19:24:26.455292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.845 [2024-07-24 19:24:26.455299] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.845 [2024-07-24 19:24:26.455307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100624 len:8 PRP1 0x0 PRP2 0x0 00:23:54.845 [2024-07-24 19:24:26.455315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.845 [2024-07-24 19:24:26.455326] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.845 [2024-07-24 19:24:26.455333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.845 [2024-07-24 19:24:26.455341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100632 len:8 PRP1 0x0 PRP2 0x0 00:23:54.845 [2024-07-24 19:24:26.455350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.845 [2024-07-24 
19:24:26.455359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.845 [2024-07-24 19:24:26.455366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.845 [2024-07-24 19:24:26.455373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100640 len:8 PRP1 0x0 PRP2 0x0 00:23:54.845 [2024-07-24 19:24:26.455382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.845 [2024-07-24 19:24:26.455394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.845 [2024-07-24 19:24:26.455401] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.845 [2024-07-24 19:24:26.455409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100648 len:8 PRP1 0x0 PRP2 0x0 00:23:54.845 [2024-07-24 19:24:26.455417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.845 [2024-07-24 19:24:26.455428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.845 [2024-07-24 19:24:26.455435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.845 [2024-07-24 19:24:26.455443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100656 len:8 PRP1 0x0 PRP2 0x0 00:23:54.845 [2024-07-24 19:24:26.455452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.845 [2024-07-24 19:24:26.455461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.845 [2024-07-24 19:24:26.455468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.845 [2024-07-24 19:24:26.455476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100664 len:8 PRP1 0x0 PRP2 0x0 00:23:54.845 [2024-07-24 19:24:26.455485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.846 [2024-07-24 19:24:26.455494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.846 [2024-07-24 19:24:26.455501] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.846 [2024-07-24 19:24:26.455508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100672 len:8 PRP1 0x0 PRP2 0x0 00:23:54.846 [2024-07-24 19:24:26.455517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.846 [2024-07-24 19:24:26.455526] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.846 [2024-07-24 19:24:26.455533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.846 [2024-07-24 19:24:26.455541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100680 len:8 PRP1 0x0 PRP2 0x0 00:23:54.846 [2024-07-24 19:24:26.455550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.846 [2024-07-24 19:24:26.455559] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.846 [2024-07-24 19:24:26.455566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.846 [2024-07-24 19:24:26.455573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100688 len:8 PRP1 0x0 PRP2 0x0 00:23:54.846 [2024-07-24 19:24:26.455582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.846 [2024-07-24 19:24:26.467499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.846 [2024-07-24 19:24:26.467513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.846 [2024-07-24 19:24:26.467524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100696 len:8 PRP1 0x0 PRP2 0x0 00:23:54.846 [2024-07-24 19:24:26.467534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.846 [2024-07-24 19:24:26.467545] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.846 [2024-07-24 19:24:26.467553] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.846 [2024-07-24 19:24:26.467562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100704 len:8 PRP1 0x0 PRP2 0x0 00:23:54.846 [2024-07-24 19:24:26.467573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.846 [2024-07-24 19:24:26.467584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.846 [2024-07-24 19:24:26.467593] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.846 [2024-07-24 19:24:26.467602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100712 len:8 PRP1 0x0 PRP2 0x0 00:23:54.846 [2024-07-24 19:24:26.467615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.846 [2024-07-24 19:24:26.467625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.846 [2024-07-24 19:24:26.467634] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.846 [2024-07-24 19:24:26.467643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100720 len:8 PRP1 0x0 PRP2 0x0 00:23:54.846 [2024-07-24 19:24:26.467655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.846 [2024-07-24 19:24:26.467666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.846 [2024-07-24 19:24:26.467674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.846 [2024-07-24 19:24:26.467684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100728 len:8 PRP1 0x0 PRP2 0x0 00:23:54.846 [2024-07-24 19:24:26.467694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.846 [2024-07-24 19:24:26.467705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
00:23:54.846 [2024-07-24 19:24:26.467860] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x134c990 was disconnected and freed. reset controller.
00:23:54.846 [2024-07-24 19:24:26.467873] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[... the four outstanding admin ASYNC EVENT REQUEST (0c) commands (qid:0, cid:0 through cid:3) are likewise completed ABORTED - SQ DELETION (00/08) ...]
00:23:54.846 [2024-07-24 19:24:26.467991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:54.846 [2024-07-24 19:24:26.468037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1359590 (9): Bad file descriptor
00:23:54.846 [2024-07-24 19:24:26.471150] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:54.846 [2024-07-24 19:24:26.498901] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
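The sequence above is the normal bdev_nvme failover path: when the TCP connection on 10.0.0.2:4420 drops, every in-flight and queued command on the I/O queue pair is completed with ABORTED - SQ DELETION, the qpair is freed, the driver fails over to the alternate transport ID 10.0.0.2:4421, and the controller reset completes. As a rough sketch (not taken from this job's scripts), a two-path controller of this kind is typically set up through SPDK's rpc.py; the bdev name, the second listener, and the -x failover multipath flag below are illustrative assumptions:

  # Assumed setup, for illustration only: expose a second listener on the
  # target, then attach the same subsystem twice so bdev_nvme has a standby
  # path to fail over to (names and addresses are hypothetical).
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4421
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1                 # primary path
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1 -x failover     # standby path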
00:23:54.846 [2024-07-24 19:24:30.042036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:54.846 [2024-07-24 19:24:30.042081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... command/completion pairs repeat for the I/O caught on this queue pair (WRITEs lba:63400 through lba:63456, READs lba:62632 through lba:63184), each completed ABORTED - SQ DELETION (00/08) ...]
00:23:54.848 [2024-07-24 19:24:30.043713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:63192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-24 19:24:30.043726]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.848 [2024-07-24 19:24:30.043737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.848 [2024-07-24 19:24:30.043746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.848 [2024-07-24 19:24:30.043757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.848 [2024-07-24 19:24:30.043767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.848 [2024-07-24 19:24:30.043778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:63216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.848 [2024-07-24 19:24:30.043788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.848 [2024-07-24 19:24:30.043798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.848 [2024-07-24 19:24:30.043808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.848 [2024-07-24 19:24:30.043819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:63232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.848 [2024-07-24 19:24:30.043829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.848 [2024-07-24 19:24:30.043840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:63240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.848 [2024-07-24 19:24:30.043850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.848 [2024-07-24 19:24:30.043861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:63248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.848 [2024-07-24 19:24:30.043870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.848 [2024-07-24 19:24:30.043881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:63256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.848 [2024-07-24 19:24:30.043890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.848 [2024-07-24 19:24:30.043901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:63464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.848 [2024-07-24 19:24:30.043910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.848 [2024-07-24 19:24:30.043921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.848 [2024-07-24 19:24:30.043930] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.848 [2024-07-24 19:24:30.043941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.848 [2024-07-24 19:24:30.043950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.848 [2024-07-24 19:24:30.043961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:63488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.848 [2024-07-24 19:24:30.043970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.848 [2024-07-24 19:24:30.043980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.848 [2024-07-24 19:24:30.043990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.848 [2024-07-24 19:24:30.044001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.848 [2024-07-24 19:24:30.044010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.848 [2024-07-24 19:24:30.044020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:63512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.848 [2024-07-24 19:24:30.044029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.848 [2024-07-24 19:24:30.044040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:63520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.848 [2024-07-24 19:24:30.044049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.848 [2024-07-24 19:24:30.044060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:63528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.848 [2024-07-24 19:24:30.044070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.848 [2024-07-24 19:24:30.044081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.848 [2024-07-24 19:24:30.044090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.848 [2024-07-24 19:24:30.044100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.848 [2024-07-24 19:24:30.044112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.848 [2024-07-24 19:24:30.044123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:63552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.848 [2024-07-24 19:24:30.044133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.848 [2024-07-24 19:24:30.044143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:63560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.848 [2024-07-24 19:24:30.044152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.848 [2024-07-24 19:24:30.044163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:63568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.848 [2024-07-24 19:24:30.044172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.848 [2024-07-24 19:24:30.044183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:63576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.848 [2024-07-24 19:24:30.044192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.848 [2024-07-24 19:24:30.044203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.848 [2024-07-24 19:24:30.044212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.848 [2024-07-24 19:24:30.044223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:63592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.848 [2024-07-24 19:24:30.044232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.848 [2024-07-24 19:24:30.044243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.848 [2024-07-24 19:24:30.044252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.848 [2024-07-24 19:24:30.044263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.848 [2024-07-24 19:24:30.044272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.848 [2024-07-24 19:24:30.044282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.848 [2024-07-24 19:24:30.044292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.848 [2024-07-24 19:24:30.044303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.848 [2024-07-24 19:24:30.044312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.848 [2024-07-24 19:24:30.044322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.848 [2024-07-24 19:24:30.044331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:54.849 [2024-07-24 19:24:30.044342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.849 [2024-07-24 19:24:30.044352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:30.044364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.849 [2024-07-24 19:24:30.044373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:30.044384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:30.044395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:30.044406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:63272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:30.044416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:30.044427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:63280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:30.044436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:30.044446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:63288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:30.044456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:30.044467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:63296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:30.044476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:30.044487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:30.044496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:30.044507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:63312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:30.044516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:30.044527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:30.044536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 
19:24:30.044547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:30.044556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:30.044567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:30.044576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:30.044587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:30.044596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:30.044607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:63352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:30.044618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:30.044629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:30.044639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:30.044651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:30.044660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:30.044671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:63376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:30.044681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:30.044691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137d3b0 is same with the state(5) to be set 00:23:54.849 [2024-07-24 19:24:30.044702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:54.849 [2024-07-24 19:24:30.044712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:54.849 [2024-07-24 19:24:30.044724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63384 len:8 PRP1 0x0 PRP2 0x0 00:23:54.849 [2024-07-24 19:24:30.044733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:30.044778] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x137d3b0 was disconnected and freed. reset controller. 
00:23:54.849 [2024-07-24 19:24:30.044789] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:23:54.849 [2024-07-24 19:24:30.044813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:54.849 [2024-07-24 19:24:30.044823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:54.849 [2024-07-24 19:24:30.044834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:54.849 [2024-07-24 19:24:30.044843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:54.849 [2024-07-24 19:24:30.044853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:54.849 [2024-07-24 19:24:30.044862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:54.849 [2024-07-24 19:24:30.044872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:54.849 [2024-07-24 19:24:30.044881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:54.849 [2024-07-24 19:24:30.044890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:54.849 [2024-07-24 19:24:30.047577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:54.849 [2024-07-24 19:24:30.047608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1359590 (9): Bad file descriptor
00:23:54.849 [2024-07-24 19:24:30.154911] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
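The failover episode above is the readable core of this dump: every queued I/O on qid:1 is completed with ABORTED - SQ DELETION (NVMe status 00/08), the admin queue's outstanding ASYNC EVENT REQUESTs are aborted the same way, the controller is marked failed, and bdev_nvme fails over from 10.0.0.2:4421 to 10.0.0.2:4422 and resets successfully. Hundreds of near-identical abort records make the raw console hard to scan, so a small post-processing script helps. What follows is a minimal sketch, assuming Python 3.8+; the regexes are derived from the record formats visible in this output, not from an official SPDK log schema, and the default input file name is hypothetical.

#!/usr/bin/env python3
"""Condense SPDK nvme_qpair abort storms in a console dump into short summaries."""
import re
import sys
from collections import Counter

# Every record begins with the Jenkins elapsed-time stamp and a bracketed
# wall-clock time, e.g. "00:23:54.849 [2024-07-24 19:24:30.044789]".
REC_START = re.compile(r"(?=\d{2}:\d{2}:\d{2}\.\d{3} \[\d{4}-\d{2}-\d{2} )")
# e.g. "...spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 ..."
COMPLETION = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: "
    r"(?P<status>[A-Z][A-Z -]*?) \((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\)"
    r" qid:(?P<qid>\d+)")
# e.g. "...bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422"
FAILOVER = re.compile(
    r"bdev_nvme_failover_trid: \*NOTICE\*: Start failover from (?P<src>\S+) to (?P<dst>\S+)")
RESET_DONE = re.compile(r"Resetting controller successful")

def main(path):
    with open(path, encoding="utf-8", errors="replace") as fh:
        text = fh.read()
    completions = Counter()
    # Split on record starts, not on newlines: the console wraps records
    # mid-line, as in the surrounding output.
    for rec in REC_START.split(text):
        if (m := COMPLETION.search(rec)):
            completions[(m["status"], m["sct"], m["sc"], m["qid"])] += 1
        elif (m := FAILOVER.search(rec)):
            print(f"failover: {m['src']} -> {m['dst']}")
        elif RESET_DONE.search(rec):
            print("controller reset completed")
    for (status, sct, sc, qid), n in sorted(completions.items()):
        print(f"{n:6d}x {status} (sct {sct} / sc {sc}) on qid {qid}")

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "console.log")

Splitting on the elapsed-time stamp rather than on newlines is the one design point that matters here, since the completion for a command often lands on a different console line than the command print it pairs with.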
00:23:54.849 [2024-07-24 19:24:34.435200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.849 [2024-07-24 19:24:34.435242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.435254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.849 [2024-07-24 19:24:34.435264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.435274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.849 [2024-07-24 19:24:34.435284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.435294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.849 [2024-07-24 19:24:34.435303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.435312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1359590 is same with the state(5) to be set 00:23:54.849 [2024-07-24 19:24:34.435357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:34.435368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.435384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:34.435393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.435404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:34.435413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.435424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:34.435433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.435444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:34.435453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.435463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:34.435472] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.435483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:34.435493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.435504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:34.435512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.435523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:34.435534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.435545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:34.435554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.435564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:34.435573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.435584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:34.435593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.435605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:34.435614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.435625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:34.435634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.435645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:34.435653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.435664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:18960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.849 [2024-07-24 19:24:34.435673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.435684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:18968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.849 [2024-07-24 19:24:34.435694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.435704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.849 [2024-07-24 19:24:34.435720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.435732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.849 [2024-07-24 19:24:34.435742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.435754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.849 [2024-07-24 19:24:34.435763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.435774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.849 [2024-07-24 19:24:34.435784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.435794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.849 [2024-07-24 19:24:34.435805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.435815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.849 [2024-07-24 19:24:34.435824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.435834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:34.435843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.435854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:34.435862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.435873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:34.435882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.435893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:34.435902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.435913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:34.435922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.435933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:34.435942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.435952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:34.435961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.435971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:34.435981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.435991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:34.436000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.436010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:34.436019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.436030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:34.436039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.436051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:34.436060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.436071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:34.436080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 
[2024-07-24 19:24:34.436090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:18616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:34.436099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.436110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.849 [2024-07-24 19:24:34.436118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.849 [2024-07-24 19:24:34.436129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.850 [2024-07-24 19:24:34.436138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.850 [2024-07-24 19:24:34.436148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.850 [2024-07-24 19:24:34.436158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.850 [2024-07-24 19:24:34.436168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:18648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.850 [2024-07-24 19:24:34.436177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.850 [2024-07-24 19:24:34.436187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.850 [2024-07-24 19:24:34.436196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.850 [2024-07-24 19:24:34.436207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.850 [2024-07-24 19:24:34.436216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.850 [2024-07-24 19:24:34.436227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:18672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.850 [2024-07-24 19:24:34.436237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.850 [2024-07-24 19:24:34.436249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:18680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.850 [2024-07-24 19:24:34.436258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.850 [2024-07-24 19:24:34.436269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.850 [2024-07-24 19:24:34.436278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.850 [2024-07-24 19:24:34.436288] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.850 [2024-07-24 19:24:34.436299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.850 [2024-07-24 19:24:34.436309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.850 [2024-07-24 19:24:34.436319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.850 [2024-07-24 19:24:34.436329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.850 [2024-07-24 19:24:34.436338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.850 [2024-07-24 19:24:34.436348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.850 [2024-07-24 19:24:34.436357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.850 [2024-07-24 19:24:34.436368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.850 [2024-07-24 19:24:34.436376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.850 [2024-07-24 19:24:34.436387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.850 [2024-07-24 19:24:34.436396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.850 [2024-07-24 19:24:34.436406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.850 [2024-07-24 19:24:34.436415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.850 [2024-07-24 19:24:34.436426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.850 [2024-07-24 19:24:34.436435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.850 [2024-07-24 19:24:34.436445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.850 [2024-07-24 19:24:34.436454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.850 [2024-07-24 19:24:34.436465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.850 [2024-07-24 19:24:34.436473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.850 [2024-07-24 19:24:34.436484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:7 nsid:1 lba:18712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.850 [2024-07-24 19:24:34.436493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.850 [2024-07-24 19:24:34.436503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.850 [2024-07-24 19:24:34.436512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.850 [2024-07-24 19:24:34.436523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.850 [2024-07-24 19:24:34.436534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.850 [2024-07-24 19:24:34.436547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.850 [2024-07-24 19:24:34.436558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.850 [2024-07-24 19:24:34.436571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.850 [2024-07-24 19:24:34.436583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.850 [2024-07-24 19:24:34.436594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.850 [2024-07-24 19:24:34.436604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.850 [2024-07-24 19:24:34.436615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.850 [2024-07-24 19:24:34.436625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.850 [2024-07-24 19:24:34.436637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.850 [2024-07-24 19:24:34.436648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.850 [2024-07-24 19:24:34.436659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.850 [2024-07-24 19:24:34.436670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.850 [2024-07-24 19:24:34.436683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.850 [2024-07-24 19:24:34.436692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.850 [2024-07-24 19:24:34.436705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18792 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
[... several dozen further nvme_qpair.c command/completion pairs elided: each queued READ (lba 18800-18952) and WRITE (lba 19088-19400) on qid:1 was printed by nvme_io_qpair_print_command and completed by spdk_nvme_print_completion with ABORTED - SQ DELETION (00/08) while the submission queue was torn down for the controller reset ...]
00:23:54.851 [2024-07-24 19:24:34.437980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:54.851 [2024-07-24 19:24:34.437988] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:54.851 [2024-07-24 19:24:34.437996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19408 len:8 PRP1 0x0 PRP2 0x0
00:23:54.851 [2024-07-24 19:24:34.438006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:54.851 [2024-07-24 19:24:34.438051] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x137d3b0 was disconnected and freed. reset controller.
00:23:54.851 [2024-07-24 19:24:34.438063] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:23:54.851 [2024-07-24 19:24:34.438074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:54.851 [2024-07-24 19:24:34.440764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:54.851 [2024-07-24 19:24:34.440797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1359590 (9): Bad file descriptor
00:23:54.851 [2024-07-24 19:24:34.515894] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:23:54.851
00:23:54.851                                 Latency(us)
00:23:54.851 Device Information : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
00:23:54.851 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:54.851 Verification LBA range: start 0x0 length 0x4000
00:23:54.851 NVMe0n1            :      15.00   12325.42      48.15     671.08      0.00    9827.32     786.43   23697.82
00:23:54.851 ===================================================================================================================
00:23:54.851 Total              :              12325.42      48.15     671.08      0.00    9827.32     786.43   23697.82
00:23:54.851 Received shutdown signal, test time was about 15.000000 seconds
00:23:54.851
00:23:54.851                                 Latency(us)
00:23:54.851 Device Information : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
00:23:54.851 ===================================================================================================================
00:23:54.851 Total              :                  0.00       0.00       0.00      0.00       0.00       0.00       0.00
00:23:54.851 19:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:23:54.851 19:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:23:54.851 19:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:23:54.851 19:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1625330
00:23:54.851 19:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:23:54.851 19:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1625330 /var/tmp/bdevperf.sock
00:23:54.851 19:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1625330 ']'
00:23:54.851 19:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:54.851 19:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:23:54.851 19:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838
-- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:54.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:54.851 19:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:54.851 19:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:55.418 19:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:55.418 19:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:23:55.418 19:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:55.678 [2024-07-24 19:24:41.682234] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:55.678 19:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:55.678 [2024-07-24 19:24:41.854686] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:55.678 19:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:55.937 NVMe0n1 00:23:55.937 19:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:56.196 00:23:56.455 19:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:56.455 00:23:56.455 19:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:56.455 19:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:56.714 19:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:56.973 19:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:00.269 19:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:00.269 19:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:00.269 19:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1626385 00:24:00.269 19:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:00.269 19:24:46 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@92 -- # wait 1626385 00:24:01.283 0 00:24:01.283 19:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:01.283 [2024-07-24 19:24:40.714350] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:24:01.283 [2024-07-24 19:24:40.714403] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1625330 ] 00:24:01.283 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.283 [2024-07-24 19:24:40.783575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.283 [2024-07-24 19:24:40.848542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.283 [2024-07-24 19:24:43.024485] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:01.283 [2024-07-24 19:24:43.024529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.283 [2024-07-24 19:24:43.024543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.283 [2024-07-24 19:24:43.024553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.283 [2024-07-24 19:24:43.024563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.283 [2024-07-24 19:24:43.024573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.283 [2024-07-24 19:24:43.024582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.283 [2024-07-24 19:24:43.024591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.283 [2024-07-24 19:24:43.024600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.283 [2024-07-24 19:24:43.024609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.283 [2024-07-24 19:24:43.024636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.283 [2024-07-24 19:24:43.024651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be3590 (9): Bad file descriptor 00:24:01.283 [2024-07-24 19:24:43.085571] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:01.283 Running I/O for 1 seconds... 
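The trace above replays the second half of the failover test: the first bdevperf run must have produced exactly three 'Resetting controller successful' notices (the grep -c / (( count != 3 )) check), then a fresh bdevperf is started in daemon mode, all three portals are attached as alternate paths to one controller, and the active path is torn down while I/O runs. A condensed sketch of that sequence; the individual RPCs are taken verbatim from the trace, while the loop and the SPDK/RPC variables are shorthand of mine, not part of the test script:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock"

    # target side: expose two extra portals on the same subsystem
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

    # initiator side: register all three paths under one controller name
    for port in 4420 4421 4422; do
        $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done

    # drop the active path; bdev_nvme should fail over to a surviving portal mid-I/O
    $RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The 'Start failover from 10.0.0.2:4420 to 10.0.0.2:4421' notice in try.txt above is the expected outcome: queued I/O on the dying qpair is aborted with SQ DELETION and resubmitted once the reset against the next path completes.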
00:24:01.283
00:24:01.283                                 Latency(us)
00:24:01.283 Device Information : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
00:24:01.283 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:01.284 Verification LBA range: start 0x0 length 0x4000
00:24:01.284 NVMe0n1            :       1.00   11620.08      45.39       0.00      0.00   10976.95    2188.90   15099.49
00:24:01.284 ===================================================================================================================
00:24:01.284 Total              :              11620.08      45.39       0.00      0.00   10976.95    2188.90   15099.49
00:24:01.284 19:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:01.284 19:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:24:01.543 19:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:01.543 19:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:01.543 19:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:24:01.802 19:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:02.061 19:24:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:24:05.350 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:05.350 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:24:05.351 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1625330
00:24:05.351 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1625330 ']'
00:24:05.351 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1625330
00:24:05.351 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:24:05.351 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:05.351 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1625330
00:24:05.351 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:24:05.351 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:24:05.351 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1625330'
killing process with pid 1625330
00:24:05.351 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1625330
00:24:05.351 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1625330
00:24:05.351 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:24:05.351 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:05.610 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:05.610 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:05.610 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:05.610 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:05.610 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:24:05.610 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:05.610 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:24:05.610 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:05.610 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:05.610 rmmod nvme_tcp 00:24:05.610 rmmod nvme_fabrics 00:24:05.610 rmmod nvme_keyring 00:24:05.610 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:05.610 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:24:05.610 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:24:05.610 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1622206 ']' 00:24:05.610 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1622206 00:24:05.610 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1622206 ']' 00:24:05.610 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1622206 00:24:05.610 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:24:05.610 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:05.610 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1622206 00:24:05.610 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:05.610 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:05.610 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1622206' 00:24:05.610 killing process with pid 1622206 00:24:05.610 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1622206 00:24:05.610 19:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1622206 00:24:05.870 19:24:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:05.870 19:24:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:05.870 19:24:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:05.870 19:24:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:05.870 19:24:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:05.870 19:24:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.870 19:24:52 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:05.870 19:24:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.407 19:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:08.407 00:24:08.407 real 0m39.388s 00:24:08.407 user 2m1.325s 00:24:08.407 sys 0m9.932s 00:24:08.407 19:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:08.407 19:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:08.407 ************************************ 00:24:08.407 END TEST nvmf_failover 00:24:08.407 ************************************ 00:24:08.407 19:24:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:08.407 19:24:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:08.407 19:24:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:08.407 19:24:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.407 ************************************ 00:24:08.407 START TEST nvmf_host_discovery 00:24:08.407 ************************************ 00:24:08.407 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:08.407 * Looking for test storage... 00:24:08.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:08.407 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:08.407 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:08.407 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.407 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.407 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.407 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.407 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.407 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.407 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.407 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.407 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.407 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.407 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=[... /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin prepended repeatedly ahead of the standard system PATH; multi-hundred-character expansion elided ...]
00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=[... as above ...]
00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=[... as above ...]
00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH
00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo [... full PATH echoed; elided ...]
00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0
00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0
00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009
00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode
00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test
00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock
00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit
00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs
00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no
00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns
00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:24:08.408 19:24:54
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:24:08.408 19:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:14.983 19:25:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:14.983 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:14.983 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:14.983 Found net devices under 0000:af:00.0: cvl_0_0 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:14.983 Found net devices under 0000:af:00.1: cvl_0_1 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:14.983 19:25:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:14.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:14.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:24:14.983 00:24:14.983 --- 10.0.0.2 ping statistics --- 00:24:14.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.983 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:24:14.983 19:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:14.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:14.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:24:14.983 00:24:14.983 --- 10.0.0.1 ping statistics --- 00:24:14.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.983 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:24:14.983 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:14.983 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:24:14.983 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:14.983 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:14.983 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:14.983 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:14.983 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:14.983 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:14.983 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:14.983 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:14.983 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:14.983 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:14.983 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:14.984 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1630865 00:24:14.984 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:14.984 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1630865 00:24:14.984 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1630865 ']' 00:24:14.984 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.984 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 
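The nvmf_tcp_init trace above builds the physical-NIC loopback topology these tcp tests run on: the first e810 port (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, while its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A minimal sketch of the same setup, using only the device and namespace names that appear in the trace (flush and cleanup steps omitted):

    NS=cvl_0_0_ns_spdk
    ip netns add $NS
    ip link set cvl_0_0 netns $NS                 # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator stays in the root namespace
    ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec $NS ip link set cvl_0_0 up
    ip netns exec $NS ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                            # root namespace -> target
    ip netns exec $NS ping -c 1 10.0.0.1          # target namespace -> initiator

Every target invocation is then wrapped as ip netns exec cvl_0_0_ns_spdk ... , which is why nvmfappstart above launches nvmf_tgt through the namespace command.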
00:24:14.984 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:14.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:14.984 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:14.984 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:14.984 [2024-07-24 19:25:01.092477] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:24:14.984 [2024-07-24 19:25:01.092530] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:14.984 EAL: No free 2048 kB hugepages reported on node 1 00:24:14.984 [2024-07-24 19:25:01.164951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.243 [2024-07-24 19:25:01.236621] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:15.243 [2024-07-24 19:25:01.236657] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:15.243 [2024-07-24 19:25:01.236666] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:15.243 [2024-07-24 19:25:01.236674] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:15.243 [2024-07-24 19:25:01.236681] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:15.243 [2024-07-24 19:25:01.236708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:15.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:15.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:24:15.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:15.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:15.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:15.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:15.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.811 [2024-07-24 19:25:01.921558] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:15.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:15.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:24:15.811 [2024-07-24 19:25:01.933703] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:15.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:15.811 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.812 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.812 null0 00:24:15.812 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.812 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:15.812 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.812 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.812 null1 00:24:15.812 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.812 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:15.812 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.812 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.812 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.812 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1631117 00:24:15.812 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:15.812 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1631117 /tmp/host.sock 00:24:15.812 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1631117 ']' 00:24:15.812 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:24:15.812 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:15.812 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:15.812 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:15.812 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:15.812 19:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.812 [2024-07-24 19:25:02.010307] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
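[Editor's note] For orientation: stripped of the rpc_cmd/xtrace plumbing, the target-side setup traced above reduces to the short RPC sequence below. This is a sketch, not the harness itself; it assumes SPDK's rpc.py on PATH talking to the default target RPC socket, with the addresses, ports, and flags copied verbatim from the trace.
# Sketch of the target-side calls seen above (-u 8192 sets the I/O unit
# size; -o is the TCP-specific transport option the harness passes for
# tcp runs).
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
    -t tcp -a 10.0.0.2 -s 8009        # discovery service on port 8009
rpc.py bdev_null_create null0 1000 512   # 1000 MB null bdev, 512 B blocks
rpc.py bdev_null_create null1 1000 512
rpc.py bdev_wait_for_examine             # block until bdev examine settles
# A second SPDK app is then launched as the "host" side with its own RPC
# socket, as the hostpid/waitforlisten lines above show:
#   nvmf_tgt -m 0x1 -r /tmp/host.sock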
00:24:15.812 [2024-07-24 19:25:02.010354] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1631117 ] 00:24:15.812 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.071 [2024-07-24 19:25:02.078890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.071 [2024-07-24 19:25:02.153753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.640 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:16.640 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:24:16.640 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:16.640 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:16.640 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.640 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.640 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.640 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:16.640 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.640 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.640 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.640 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:16.640 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:16.640 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:16.640 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:16.640 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:16.640 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.640 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.640 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:16.640 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.900 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:16.900 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:16.900 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:16.900 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:16.900 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:16.900 19:25:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:16.900 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.900 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.900 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.900 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:16.900 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:16.900 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.900 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.900 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.900 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:16.900 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:16.900 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.900 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.900 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:16.900 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:16.900 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:16.900 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.900 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:16.900 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:16.900 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:16.900 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:16.900 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.900 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.900 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:16.900 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:16.900 19:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.900 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:16.900 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:16.900 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.900 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.900 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.900 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:16.900 
19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:16.900 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:16.900 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:16.900 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.900 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.900 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:16.900 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.900 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:16.900 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:16.900 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:16.900 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:16.900 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:16.900 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.900 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.900 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:16.900 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.160 [2024-07-24 19:25:03.152884] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:24:17.160 19:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:24:17.728 [2024-07-24 19:25:03.881258] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:17.728 [2024-07-24 19:25:03.881278] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:17.728 [2024-07-24 19:25:03.881291] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:17.986 [2024-07-24 19:25:04.007673] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:17.986 [2024-07-24 19:25:04.187095] 
bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:17.986 [2024-07-24 19:25:04.187114] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # 
waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.246 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:18.506 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # local max=10 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.507 [2024-07-24 19:25:04.677021] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:18.507 [2024-07-24 19:25:04.678013] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:18.507 [2024-07-24 19:25:04.678035] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:18.507 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:18.766 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.766 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:18.766 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:18.766 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:18.766 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:18.766 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:18.766 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:18.766 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:18.766 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:18.766 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_controllers -n nvme0 00:24:18.766 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.766 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:18.766 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.766 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:18.766 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:18.766 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.766 [2024-07-24 19:25:04.804405] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:18.766 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:18.766 19:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:24:18.766 [2024-07-24 19:25:04.866951] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:18.766 [2024-07-24 19:25:04.866968] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:18.766 [2024-07-24 19:25:04.866975] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:19.704 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:19.704 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:19.704 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:19.704 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:19.704 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:19.704 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:19.704 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.704 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:19.704 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.704 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.704 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:19.704 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:19.704 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:19.704 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:19.704 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:19.704 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:19.704 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:19.704 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:19.704 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:19.704 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:19.704 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:19.704 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:19.704 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.704 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.704 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.705 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:19.705 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:19.705 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:19.705 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:19.705 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:19.705 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.705 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.965 [2024-07-24 19:25:05.945141] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:19.965 [2024-07-24 19:25:05.945162] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:19.965 [2024-07-24 19:25:05.949101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.965 [2024-07-24 19:25:05.949120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.965 [2024-07-24 19:25:05.949131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.965 [2024-07-24 19:25:05.949141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.965 [2024-07-24 19:25:05.949151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.965 [2024-07-24 19:25:05.949160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.965 [2024-07-24 19:25:05.949170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:24:19.965 [2024-07-24 19:25:05.949179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.965 [2024-07-24 19:25:05.949188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6fd0 is same with the state(5) to be set 00:24:19.965 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.965 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:19.965 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:19.965 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:19.965 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:19.965 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:19.965 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:24:19.965 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:19.965 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:19.965 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.965 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.965 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:19.965 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:19.965 [2024-07-24 19:25:05.959115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b6fd0 (9): Bad file descriptor 00:24:19.965 [2024-07-24 19:25:05.969152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:19.965 [2024-07-24 19:25:05.969521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.965 [2024-07-24 19:25:05.969538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b6fd0 with addr=10.0.0.2, port=4420 00:24:19.965 [2024-07-24 19:25:05.969549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6fd0 is same with the state(5) to be set 00:24:19.965 [2024-07-24 19:25:05.969562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b6fd0 (9): Bad file descriptor 00:24:19.965 [2024-07-24 19:25:05.969582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:19.965 [2024-07-24 19:25:05.969592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:19.965 [2024-07-24 19:25:05.969603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:19.965 [2024-07-24 19:25:05.969615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
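[Editor's note] The connect() failed, errno = 111 lines above are expected at this point: 111 is ECONNREFUSED, and the test has just removed the 4420 listener, so the host's bdev_nvme reset path keeps dialing a port nobody listens on until the discovery poller prunes it. A sketch of the triggering step and of how to watch the host-side path list shrink (RPC socket and jq filter copied from the trace):
# Remove the first data listener; 4420 now refuses connections.
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420
# From the host-side app, list the remaining path service IDs;
# this eventually prints only 4421.
rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid'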
00:24:19.965 19:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.965 [2024-07-24 19:25:05.979208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:19.965 [2024-07-24 19:25:05.979450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.965 [2024-07-24 19:25:05.979465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b6fd0 with addr=10.0.0.2, port=4420 00:24:19.965 [2024-07-24 19:25:05.979475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6fd0 is same with the state(5) to be set 00:24:19.965 [2024-07-24 19:25:05.979489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b6fd0 (9): Bad file descriptor 00:24:19.965 [2024-07-24 19:25:05.979500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:19.965 [2024-07-24 19:25:05.979509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:19.965 [2024-07-24 19:25:05.979518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:19.965 [2024-07-24 19:25:05.979530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:19.965 [2024-07-24 19:25:05.989261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:19.965 [2024-07-24 19:25:05.989561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.965 [2024-07-24 19:25:05.989578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b6fd0 with addr=10.0.0.2, port=4420 00:24:19.965 [2024-07-24 19:25:05.989588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6fd0 is same with the state(5) to be set 00:24:19.965 [2024-07-24 19:25:05.989601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b6fd0 (9): Bad file descriptor 00:24:19.965 [2024-07-24 19:25:05.989614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:19.965 [2024-07-24 19:25:05.989623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:19.965 [2024-07-24 19:25:05.989632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:19.965 [2024-07-24 19:25:05.989647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
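[Editor's note] The repeated local max=10 / (( max-- )) / eval ... / sleep 1 lines that fill this trace come from a small polling helper in autotest_common.sh. Its shape is roughly the following (reconstructed from the trace line numbers @914-@920, not the verbatim source):
# Poll an arbitrary bash condition up to 10 times, one second apart.
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        if eval "$cond"; then
            return 0        # condition met
        fi
        sleep 1
    done
    return 1                # timed out
}
# e.g. waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'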
00:24:19.965 [2024-07-24 19:25:05.999320] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:19.965 [2024-07-24 19:25:05.999643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.965 [2024-07-24 19:25:05.999658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b6fd0 with addr=10.0.0.2, port=4420 00:24:19.965 [2024-07-24 19:25:05.999668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6fd0 is same with the state(5) to be set 00:24:19.965 [2024-07-24 19:25:05.999681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b6fd0 (9): Bad file descriptor 00:24:19.965 [2024-07-24 19:25:05.999702] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:19.965 [2024-07-24 19:25:05.999713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:19.965 [2024-07-24 19:25:05.999727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:19.965 [2024-07-24 19:25:05.999739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:19.965 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.965 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:19.965 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:19.965 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:19.965 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:19.965 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:19.965 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:19.965 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:24:19.965 [2024-07-24 19:25:06.009375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:19.965 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:19.965 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:19.965 [2024-07-24 19:25:06.010361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.965 [2024-07-24 19:25:06.010385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b6fd0 with addr=10.0.0.2, port=4420 00:24:19.965 [2024-07-24 19:25:06.010398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6fd0 is same with the state(5) to be set 00:24:19.965 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:19.965 [2024-07-24 19:25:06.010416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b6fd0 (9): Bad file descriptor 00:24:19.965 [2024-07-24 19:25:06.010448] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:19.965 [2024-07-24 19:25:06.010459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:19.965 [2024-07-24 19:25:06.010472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:19.965 [2024-07-24 19:25:06.010489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:19.965 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.965 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:19.965 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.965 [2024-07-24 19:25:06.019429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:19.965 [2024-07-24 19:25:06.019639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.965 [2024-07-24 19:25:06.019655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b6fd0 with addr=10.0.0.2, port=4420 00:24:19.965 [2024-07-24 19:25:06.019667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6fd0 is same with the state(5) to be set 00:24:19.965 [2024-07-24 19:25:06.019680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b6fd0 (9): Bad file descriptor 00:24:19.965 [2024-07-24 19:25:06.019693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:19.965 [2024-07-24 19:25:06.019702] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:19.966 [2024-07-24 19:25:06.019712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:19.966 [2024-07-24 19:25:06.019730] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:19.966 [2024-07-24 19:25:06.029488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:19.966 [2024-07-24 19:25:06.029798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.966 [2024-07-24 19:25:06.029814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b6fd0 with addr=10.0.0.2, port=4420 00:24:19.966 [2024-07-24 19:25:06.029824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6fd0 is same with the state(5) to be set 00:24:19.966 [2024-07-24 19:25:06.029838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b6fd0 (9): Bad file descriptor 00:24:19.966 [2024-07-24 19:25:06.029858] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:19.966 [2024-07-24 19:25:06.029867] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:19.966 [2024-07-24 19:25:06.029876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:19.966 [2024-07-24 19:25:06.029888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
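[Editor's note] The "4420 not found / 4421 found again" notices just below mark the discovery poller dropping the dead path while keeping the live one. The get_subsystem_paths check that then confirms it reduces to the one-liner below (a sketch; the sort -n | xargs pair normalizes the output into a single space-separated, numerically sorted line for the string compare):
# List every path's transport service ID on controller nvme0, host side.
rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
# Expected output after the 4420 listener removal: 4421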
00:24:19.966 [2024-07-24 19:25:06.032863] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:19.966 [2024-07-24 19:25:06.032880] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:19.966 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.226 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:24:20.226 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:20.226 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:20.226 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:20.226 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:20.226 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:20.226 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:20.226 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:24:20.226 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:20.226 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:20.226 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.226 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:20.226 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.226 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:20.226 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.226 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:24:20.226 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:20.226 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:20.226 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:20.226 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:20.226 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:20.226 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:20.226 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:20.226 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:20.226 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:20.226 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:20.226 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:20.226 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.226 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.226 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.226 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:20.226 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:20.226 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:20.226 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:20.226 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:20.226 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.226 19:25:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.165 [2024-07-24 19:25:07.380435] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:21.165 [2024-07-24 19:25:07.380453] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:21.165 [2024-07-24 19:25:07.380464] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:21.495 [2024-07-24 19:25:07.508862] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:21.754 [2024-07-24 19:25:07.817067] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:21.754 [2024-07-24 19:25:07.817095] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:21.754 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.754 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:21.754 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:21.754 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:21.754 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:21.754 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:21.754 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:21.754 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.755 request: 00:24:21.755 { 00:24:21.755 "name": "nvme", 00:24:21.755 "trtype": "tcp", 00:24:21.755 "traddr": "10.0.0.2", 00:24:21.755 "adrfam": "ipv4", 00:24:21.755 "trsvcid": "8009", 00:24:21.755 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:21.755 "wait_for_attach": true, 00:24:21.755 "method": "bdev_nvme_start_discovery", 00:24:21.755 "req_id": 1 00:24:21.755 } 00:24:21.755 Got JSON-RPC error response 00:24:21.755 response: 00:24:21.755 { 00:24:21.755 "code": -17, 00:24:21.755 "message": "File exists" 00:24:21.755 } 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.755 request: 00:24:21.755 { 00:24:21.755 "name": "nvme_second", 00:24:21.755 "trtype": "tcp", 00:24:21.755 "traddr": "10.0.0.2", 00:24:21.755 "adrfam": "ipv4", 00:24:21.755 "trsvcid": "8009", 00:24:21.755 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:21.755 "wait_for_attach": true, 00:24:21.755 "method": "bdev_nvme_start_discovery", 00:24:21.755 "req_id": 1 00:24:21.755 } 00:24:21.755 Got JSON-RPC error response 00:24:21.755 response: 00:24:21.755 { 00:24:21.755 "code": -17, 00:24:21.755 "message": "File exists" 00:24:21.755 } 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:21.755 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.014 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:22.014 19:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:22.014 19:25:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:22.014 19:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:22.014 19:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.014 19:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.014 19:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:22.014 19:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:22.014 19:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.014 19:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:22.014 19:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:22.014 19:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:22.014 19:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:22.014 19:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:22.014 19:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:22.014 19:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:22.014 19:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:22.014 19:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:22.014 19:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.014 19:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.950 [2024-07-24 19:25:09.056595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.950 [2024-07-24 19:25:09.056622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15af0d0 with addr=10.0.0.2, port=8010 00:24:22.950 [2024-07-24 19:25:09.056637] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:22.950 [2024-07-24 19:25:09.056646] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:22.950 [2024-07-24 19:25:09.056653] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:23.888 [2024-07-24 19:25:10.059023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.888 [2024-07-24 19:25:10.059056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15af0d0 with addr=10.0.0.2, port=8010 00:24:23.888 [2024-07-24 19:25:10.059074] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:23.888 [2024-07-24 19:25:10.059083] nvme.c: 830:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:24:23.888 [2024-07-24 19:25:10.059092] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:24.825 [2024-07-24 19:25:11.061118] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:24.825 request: 00:24:24.825 { 00:24:24.825 "name": "nvme_second", 00:24:24.825 "trtype": "tcp", 00:24:24.825 "traddr": "10.0.0.2", 00:24:24.825 "adrfam": "ipv4", 00:24:24.825 "trsvcid": "8010", 00:24:25.085 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:25.085 "wait_for_attach": false, 00:24:25.085 "attach_timeout_ms": 3000, 00:24:25.085 "method": "bdev_nvme_start_discovery", 00:24:25.085 "req_id": 1 00:24:25.085 } 00:24:25.085 Got JSON-RPC error response 00:24:25.085 response: 00:24:25.085 { 00:24:25.085 "code": -110, 00:24:25.085 "message": "Connection timed out" 00:24:25.085 } 00:24:25.085 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:25.085 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:25.085 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:25.085 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:25.085 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:25.085 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:25.085 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:25.085 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:25.085 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:25.085 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.085 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.085 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:25.085 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.085 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:25.085 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:25.085 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1631117 00:24:25.085 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:25.085 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:25.085 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:24:25.085 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:25.085 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:24:25.085 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:25.085 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:25.085 rmmod nvme_tcp 00:24:25.085 rmmod nvme_fabrics 00:24:25.085 rmmod nvme_keyring 00:24:25.085 19:25:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:25.085 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:24:25.085 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:24:25.085 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1630865 ']' 00:24:25.085 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1630865 00:24:25.085 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 1630865 ']' 00:24:25.085 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 1630865 00:24:25.085 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:24:25.085 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:25.085 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1630865 00:24:25.085 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:25.085 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:25.085 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1630865' 00:24:25.085 killing process with pid 1630865 00:24:25.085 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 1630865 00:24:25.085 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 1630865 00:24:25.345 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:25.345 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:25.345 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:25.345 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:25.345 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:25.345 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.345 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:25.345 19:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:27.882 00:24:27.882 real 0m19.314s 00:24:27.882 user 0m22.573s 00:24:27.882 sys 0m7.182s 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.882 ************************************ 00:24:27.882 END TEST nvmf_host_discovery 00:24:27.882 ************************************ 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
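
The `killprocess` teardown traced above follows a guard, kill, reap shape. A condensed sketch under the same checks (the branch taken when the process is a sudo wrapper is not shown in this run, so only the test for it appears here):

killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1            # refuse an empty pid
    kill -0 "$pid" || return 1           # signal 0: is the process still alive?
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    # the real helper branches when comm is 'sudo' (a wrapped process); here it saw reactor_1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                          # reap so the exit status is collected
}
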
00:24:27.882 19:25:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.882 ************************************ 00:24:27.882 START TEST nvmf_host_multipath_status 00:24:27.882 ************************************ 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:27.882 * Looking for test storage... 00:24:27.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
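
nvmf/common.sh derives its host identity from `nvme gen-hostnqn`, which prints a UUID-based NQN; the bare UUID then doubles as the host ID, exactly as the NVME_HOSTNQN/NVME_HOSTID pair above shows. A small sketch of that derivation (the `##*:` strip is an assumption about how the ID is obtained; the array shape is from the trace):

NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:006f0d1b-...
NVME_HOSTID=${NVME_HOSTNQN##*:}     # keep only the UUID after the last ':'
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
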
00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
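
The wall of PATH output above is paths/export.sh doing nothing more than prepending pinned toolchain directories and re-exporting; because every nested test scope sources it again, the same prefixes accumulate. The shape of the script, inferred from its xtrace:

PATH=/opt/golangci/1.54.2/bin:$PATH    # @2 in the trace
PATH=/opt/go/1.21.1/bin:$PATH          # @3
PATH=/opt/protoc/21.7/bin:$PATH        # @4
export PATH                            # @5
echo "$PATH"                           # @6
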
00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:27.882 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:27.883 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:27.883 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:27.883 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:27.883 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.883 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:27.883 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.883 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:27.883 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:27.883 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:24:27.883 19:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:34.452 19:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:34.452 19:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:24:34.453 19:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:34.453 19:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:34.453 19:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:34.453 19:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:34.453 19:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:24:34.453 19:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:24:34.453 19:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:34.453 19:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:24:34.453 19:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:24:34.453 19:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:24:34.453 19:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:24:34.453 19:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:24:34.453 19:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:24:34.453 19:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:34.453 19:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:34.453 19:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:34.453 19:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:34.453 19:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:34.453 19:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:34.453 19:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:34.453 19:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:34.453 19:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:34.453 19:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:34.453 19:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:34.453 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:34.453 
19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:34.453 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:34.453 Found net devices under 0000:af:00.0: cvl_0_0 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:34.453 19:25:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:34.453 Found net devices under 0000:af:00.1: cvl_0_1 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables 
-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:34.453 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:34.453 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:24:34.453 00:24:34.453 --- 10.0.0.2 ping statistics --- 00:24:34.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.453 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:24:34.453 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:34.453 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:34.453 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:24:34.453 00:24:34.454 --- 10.0.0.1 ping statistics --- 00:24:34.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.454 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:24:34.454 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:34.454 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:24:34.454 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:34.454 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:34.454 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:34.454 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:34.454 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:34.454 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:34.454 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:34.454 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:34.454 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:34.454 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:34.454 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:34.454 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1636324 00:24:34.454 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:34.454 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1636324 00:24:34.454 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1636324 ']' 00:24:34.454 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.454 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:34.454 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
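
Condensing the nvmf_tcp_init sequence traced above: the first E810 port is moved into a fresh network namespace and becomes the target side (10.0.0.2), the second stays in the root namespace as the initiator (10.0.0.1), and a firewall rule plus two pings prove the path before any NVMe traffic flows. Device names are the ones this run discovered:

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1    # start from clean addresses
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                      # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target ns -> root ns
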
00:24:34.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.454 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:34.454 19:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:34.454 [2024-07-24 19:25:20.386062] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:24:34.454 [2024-07-24 19:25:20.386114] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.454 EAL: No free 2048 kB hugepages reported on node 1 00:24:34.454 [2024-07-24 19:25:20.460349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:34.454 [2024-07-24 19:25:20.534289] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:34.454 [2024-07-24 19:25:20.534327] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:34.454 [2024-07-24 19:25:20.534337] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:34.454 [2024-07-24 19:25:20.534345] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:34.454 [2024-07-24 19:25:20.534353] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:34.454 [2024-07-24 19:25:20.534394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.454 [2024-07-24 19:25:20.534397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.021 19:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:35.021 19:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:24:35.021 19:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:35.021 19:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:35.021 19:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:35.021 19:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:35.021 19:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1636324 00:24:35.021 19:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:35.279 [2024-07-24 19:25:21.377656] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.279 19:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:35.537 Malloc0 00:24:35.538 19:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:35.538 19:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:35.796 19:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:36.054 [2024-07-24 19:25:22.062071] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:36.054 19:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:36.054 [2024-07-24 19:25:22.234506] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:36.054 19:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1636654 00:24:36.054 19:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:36.054 19:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:36.054 19:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1636654 /var/tmp/bdevperf.sock 00:24:36.054 19:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1636654 ']' 00:24:36.054 19:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:36.054 19:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:36.054 19:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:36.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
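At this point the target side is fully configured. Condensed from the xtrace above, and abbreviating the full scripts/rpc.py path as rpc_py, the setup amounts to the following sequence (a sketch; the flag glosses are the usual rpc.py meanings and are not output from this run):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $rpc_py nvmf_create_transport -t tcp -o -u 8192        # TCP transport; -u sets the IO unit size, -o disables the C2H-success optimization
  $rpc_py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev with 512 B blocks, to serve as the namespace
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -r -m 2                   # allow any host, ANA reporting on, up to 2 namespaces
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

bdevperf has just been started idle (-z) on /var/tmp/bdevperf.sock with a 128-deep, 4096-byte verify workload armed for 90 seconds; the two bdev_nvme_attach_controller calls that follow (the second with -x multipath) hand it a single Nvme0n1 device reachable over both listeners.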
00:24:36.055 19:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:36.055 19:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:36.991 19:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:36.991 19:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:24:36.991 19:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:37.250 19:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:24:37.508 Nvme0n1 00:24:37.508 19:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:38.074 Nvme0n1 00:24:38.074 19:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:38.074 19:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:39.976 19:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:39.976 19:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:40.234 19:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:40.234 19:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:41.171 19:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:41.171 19:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:41.171 19:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.171 19:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:41.429 19:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.429 19:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:41.429 19:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.429 19:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:41.756 19:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:41.756 19:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:41.756 19:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.756 19:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:42.015 19:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.015 19:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:42.015 19:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:42.015 19:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.015 19:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.015 19:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:42.015 19:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.015 19:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:42.276 19:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.276 19:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:42.276 19:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.276 19:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:42.276 19:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.276 19:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:42.276 19:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:42.535 19:25:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:42.794 19:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:43.732 19:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:43.732 19:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:43.732 19:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:43.732 19:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.991 19:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:43.991 19:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:43.991 19:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.991 19:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:44.250 19:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.250 19:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:44.250 19:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.250 19:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:44.250 19:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.250 19:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:44.250 19:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.250 19:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:44.509 19:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.509 19:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:44.509 19:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:44.509 19:25:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.769 19:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.769 19:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:44.769 19:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.769 19:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:45.028 19:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:45.028 19:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:45.028 19:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:45.028 19:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:45.287 19:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:46.225 19:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:46.225 19:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:46.225 19:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.225 19:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:46.484 19:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:46.484 19:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:46.484 19:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.484 19:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:46.744 19:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:46.744 19:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:46.744 19:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:46.744 19:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.003 19:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.003 19:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:47.003 19:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.003 19:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:47.003 19:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.003 19:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:47.003 19:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.003 19:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:47.262 19:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.262 19:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:47.262 19:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.262 19:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:47.522 19:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.522 19:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:47.522 19:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:47.522 19:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:47.781 19:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:48.718 19:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:48.718 19:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:48.718 19:25:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.718 19:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:48.977 19:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:48.977 19:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:48.977 19:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.977 19:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:49.236 19:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:49.237 19:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:49.237 19:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.237 19:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:49.496 19:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.496 19:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:49.496 19:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.496 19:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:49.496 19:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.496 19:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:49.496 19:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.496 19:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:49.755 19:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.755 19:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:49.755 19:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.755 19:25:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:50.014 19:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:50.014 19:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:50.014 19:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:50.014 19:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:50.273 19:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:51.210 19:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:51.210 19:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:51.210 19:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.210 19:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:51.469 19:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:51.469 19:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:51.469 19:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:51.469 19:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.727 19:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:51.727 19:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:51.727 19:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.728 19:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:51.728 19:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:51.728 19:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:51.728 19:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.728 19:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:51.992 19:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:51.992 19:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:51.992 19:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.992 19:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:52.252 19:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:52.252 19:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:52.252 19:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.253 19:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:52.253 19:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:52.253 19:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:52.253 19:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:52.511 19:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:52.769 19:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:53.705 19:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:53.705 19:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:53.705 19:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:53.705 19:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:53.964 19:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:53.964 19:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:53.964 19:25:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:53.964 19:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:53.964 19:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:53.964 19:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:53.964 19:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:53.964 19:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:54.223 19:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.223 19:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:54.223 19:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.223 19:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:54.483 19:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.483 19:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:54.483 19:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:54.483 19:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.742 19:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:54.742 19:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:54.742 19:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.742 19:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:54.742 19:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.742 19:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:55.002 19:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:24:55.002 19:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:55.261 19:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:55.520 19:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:56.468 19:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:56.468 19:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:56.468 19:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.468 19:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:56.468 19:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:56.468 19:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:56.771 19:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.771 19:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:56.771 19:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:56.771 19:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:56.771 19:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.771 19:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:57.030 19:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.030 19:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:57.030 19:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.030 19:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:57.030 19:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.030 19:25:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:57.030 19:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.030 19:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:57.290 19:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.290 19:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:57.290 19:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:57.290 19:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.549 19:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.549 19:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:57.549 19:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:57.808 19:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:57.808 19:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:59.186 19:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:59.186 19:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:59.186 19:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.186 19:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:59.186 19:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:59.186 19:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:59.186 19:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:59.186 19:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.186 19:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.186 19:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:59.186 19:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.186 19:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:59.445 19:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.445 19:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:59.445 19:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:59.445 19:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.705 19:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.705 19:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:59.705 19:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.705 19:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:59.964 19:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.964 19:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:59.964 19:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.964 19:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:59.964 19:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.964 19:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:59.964 19:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:00.223 19:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:00.482 19:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
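Every verification round above is built from the same two helpers, reconstructed here from the xtrace as a sketch (the canonical definitions live in test/nvmf/host/multipath_status.sh):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Flip the ANA state of the two listeners: $1 applies to port 4420, $2 to 4421.
  set_ANA_state() {
      $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  # Assert one attribute (current, connected or accessible) of the io_path on the
  # given port, as the initiator reports it over the bdevperf RPC socket.
  port_status() {
      local port=$1 attr=$2 expected=$3
      [[ $($rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
          jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$attr") == "$expected" ]]
  }

check_status is then six port_status calls in a fixed order (current, connected and accessible, each for 4420 then 4421), which matches the sequence of jq filters in every one-second round. Note that the expected values shift once bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active is applied a few rounds back: with an active/active policy both reachable paths report current == true at the same time, which is why the later rounds open with check_status true true.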
00:25:01.420 19:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:01.420 19:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:01.420 19:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.420 19:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:01.679 19:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:01.679 19:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:01.679 19:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.679 19:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:01.679 19:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:01.679 19:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:01.680 19:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.680 19:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:01.939 19:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:01.939 19:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:01.939 19:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:01.939 19:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.198 19:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.198 19:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:02.198 19:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:02.198 19:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.458 19:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.458 19:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:02.458 19:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.458 19:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:02.458 19:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.458 19:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:02.458 19:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:02.717 19:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:02.977 19:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:03.914 19:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:03.914 19:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:03.914 19:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.914 19:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:04.174 19:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:04.174 19:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:04.174 19:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.174 19:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:04.433 19:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:04.433 19:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:04.433 19:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.433 19:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:04.433 19:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:25:04.433 19:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:04.433 19:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.433 19:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:04.692 19:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:04.692 19:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:04.692 19:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.692 19:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:04.952 19:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:04.952 19:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:04.952 19:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.952 19:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:04.952 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:04.952 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1636654 00:25:04.952 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1636654 ']' 00:25:04.952 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1636654 00:25:04.952 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:25:04.952 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:04.952 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1636654 00:25:05.214 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:05.214 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:05.214 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1636654' 00:25:05.214 killing process with pid 1636654 00:25:05.214 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1636654 00:25:05.214 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1636654 00:25:05.214 Connection closed with partial response: 00:25:05.214 00:25:05.214 00:25:05.214 
19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1636654 00:25:05.214 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:05.214 [2024-07-24 19:25:22.299539] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:25:05.214 [2024-07-24 19:25:22.299599] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1636654 ] 00:25:05.214 EAL: No free 2048 kB hugepages reported on node 1 00:25:05.214 [2024-07-24 19:25:22.367140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.214 [2024-07-24 19:25:22.437815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:05.214 Running I/O for 90 seconds... 00:25:05.214 [2024-07-24 19:25:36.206987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:119304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.214 [2024-07-24 19:25:36.207027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:05.214 [2024-07-24 19:25:36.207062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:119368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.214 [2024-07-24 19:25:36.207073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:05.214 [2024-07-24 19:25:36.207089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:119376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.214 [2024-07-24 19:25:36.207098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:05.214 [2024-07-24 19:25:36.207113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:119384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.214 [2024-07-24 19:25:36.207122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:05.214 [2024-07-24 19:25:36.207137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:119392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.214 [2024-07-24 19:25:36.207146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:05.214 [2024-07-24 19:25:36.207160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:119400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.214 [2024-07-24 19:25:36.207170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:05.214 [2024-07-24 19:25:36.207184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:119408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.214 [2024-07-24 19:25:36.207193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 
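The completions dumped from try.txt are the interesting part of the run: each line records an I/O that the target failed with NVMe status (03/02), i.e. status code type 0x3 (Path Related Status) with status code 0x02 (Asymmetric Access Inaccessible), which is exactly what a listener is expected to return after its ANA state was flipped to inaccessible; the host's multipath layer then reroutes those commands to the surviving path. A rough way to gauge how much traffic was bounced during the ANA flips is to count those completions in the dump (path verbatim from the trace above):

  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt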
00:25:05.215 [2024-07-24 19:25:36.207778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:119416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.215 [2024-07-24 19:25:36.207798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:05.215 [2024-07-24 19:25:36.208324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:119424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.215 [2024-07-24 19:25:36.208336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:05.215 [2024-07-24 19:25:36.208352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:119432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.215 [2024-07-24 19:25:36.208362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:05.215 [2024-07-24 19:25:36.208379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:119440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.215 [2024-07-24 19:25:36.208394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:05.215 [2024-07-24 19:25:36.208410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:119448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.215 [2024-07-24 19:25:36.208420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:05.215 [2024-07-24 19:25:36.208437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:119456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.215 [2024-07-24 19:25:36.208447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:05.215 [2024-07-24 19:25:36.208463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:119464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.215 [2024-07-24 19:25:36.208473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:05.215 [2024-07-24 19:25:36.208489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:119472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.215 [2024-07-24 19:25:36.208499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:05.215 [2024-07-24 19:25:36.208515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:119480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.215 [2024-07-24 19:25:36.208524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.215 [2024-07-24 19:25:36.208541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:119488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.215 [2024-07-24 19:25:36.208551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:05.215 [2024-07-24 19:25:36.208568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:119496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.215 [2024-07-24 19:25:36.208577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:05.215 [2024-07-24 19:25:36.208593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:119504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.215 [2024-07-24 19:25:36.208603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:05.215 [2024-07-24 19:25:36.208620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:119512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.215 [2024-07-24 19:25:36.208629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:05.215 [2024-07-24 19:25:36.208646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:119520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.215 [2024-07-24 19:25:36.208656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:05.215 [2024-07-24 19:25:36.208672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:119528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.215 [2024-07-24 19:25:36.208681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:05.215 [2024-07-24 19:25:36.208697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:119536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.215 [2024-07-24 19:25:36.208712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:05.215 [2024-07-24 19:25:36.208735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:119544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.215 [2024-07-24 19:25:36.208745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:05.215 [2024-07-24 19:25:36.208761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:119552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.215 [2024-07-24 19:25:36.208770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:05.215 [2024-07-24 19:25:36.208788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:119560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.215 [2024-07-24 19:25:36.208798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:05.215 [2024-07-24 19:25:36.209037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:119568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.215 [2024-07-24 19:25:36.209048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:05.215 [2024-07-24 19:25:36.209066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:119576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.215 [2024-07-24 19:25:36.209077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:05.215 [2024-07-24 19:25:36.209095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:119584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.215 [2024-07-24 19:25:36.209104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:05.215 [2024-07-24 19:25:36.209121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:119592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.215 [2024-07-24 19:25:36.209131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:05.215 [2024-07-24 19:25:36.209149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:119600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.215 [2024-07-24 19:25:36.209158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:05.215 [2024-07-24 19:25:36.209175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:119608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.215 [2024-07-24 19:25:36.209185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:05.215 [2024-07-24 19:25:36.209202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:119616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.215 [2024-07-24 19:25:36.209212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:05.215 [2024-07-24 19:25:36.209229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:119624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.215 [2024-07-24 19:25:36.209239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:05.215 [2024-07-24 19:25:36.209256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:119632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.215 [2024-07-24 19:25:36.209265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:05.215 [2024-07-24 19:25:36.209285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:119640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.215 [2024-07-24 19:25:36.209295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:05.215 [2024-07-24 19:25:36.209313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:119648 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:05.215 [2024-07-24 19:25:36.209322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:05.215 [2024-07-24 19:25:36.209339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:119656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.215 [2024-07-24 19:25:36.209349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:05.215 [2024-07-24 19:25:36.209366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:119664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.215 [2024-07-24 19:25:36.209376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:05.215 [2024-07-24 19:25:36.209393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:119672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.215 [2024-07-24 19:25:36.209402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:05.215 [2024-07-24 19:25:36.209420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:119680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.215 [2024-07-24 19:25:36.209430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:05.215 [2024-07-24 19:25:36.209447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:119688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.215 [2024-07-24 19:25:36.209457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:05.216 [2024-07-24 19:25:36.209476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:119696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.216 [2024-07-24 19:25:36.209486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:05.216 [2024-07-24 19:25:36.209537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:119704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.216 [2024-07-24 19:25:36.209548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:05.216 [2024-07-24 19:25:36.209567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:119712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.216 [2024-07-24 19:25:36.209577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:05.216 [2024-07-24 19:25:36.209596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:119720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.216 [2024-07-24 19:25:36.209606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:05.216 [2024-07-24 19:25:36.209624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:21 nsid:1 lba:119728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.216 [2024-07-24 19:25:36.209634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.216 [2024-07-24 19:25:36.209654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:119736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.216 [2024-07-24 19:25:36.209664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.216 [2024-07-24 19:25:36.209682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:119744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.216 [2024-07-24 19:25:36.209692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:05.216 [2024-07-24 19:25:36.209711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:119752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.216 [2024-07-24 19:25:36.209724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:05.216 [2024-07-24 19:25:36.209742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:119760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.216 [2024-07-24 19:25:36.209751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:05.216 [2024-07-24 19:25:36.209770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:119768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.216 [2024-07-24 19:25:36.209780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:05.216 [2024-07-24 19:25:36.209798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:119776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.216 [2024-07-24 19:25:36.209807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:05.216 [2024-07-24 19:25:36.209826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:119784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.216 [2024-07-24 19:25:36.209836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:05.216 [2024-07-24 19:25:36.209854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.216 [2024-07-24 19:25:36.209863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:05.216 [2024-07-24 19:25:36.209881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:119800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.216 [2024-07-24 19:25:36.209891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:05.216 [2024-07-24 19:25:36.209909] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:119808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.216 [2024-07-24 19:25:36.209918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:05.216 [2024-07-24 19:25:36.209937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:119816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.216 [2024-07-24 19:25:36.209947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:05.216 [2024-07-24 19:25:36.209965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:119824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.216 [2024-07-24 19:25:36.209974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:05.216 [2024-07-24 19:25:36.209993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:119832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.216 [2024-07-24 19:25:36.210004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:05.216 [2024-07-24 19:25:36.210022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:119840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.216 [2024-07-24 19:25:36.210031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:05.216 [2024-07-24 19:25:36.210049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:119848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.216 [2024-07-24 19:25:36.210059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:05.216 [2024-07-24 19:25:36.210078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:119856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.216 [2024-07-24 19:25:36.210087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:05.216 [2024-07-24 19:25:36.210105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:119864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.216 [2024-07-24 19:25:36.210115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:05.216 [2024-07-24 19:25:36.210134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:119872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.216 [2024-07-24 19:25:36.210143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:05.216 [2024-07-24 19:25:36.210161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:119880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.216 [2024-07-24 19:25:36.210171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 
00:25:05.216 [2024-07-24 19:25:36.210190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:119888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.216 [2024-07-24 19:25:36.210199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:05.216 [2024-07-24 19:25:36.210217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:119896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.216 [2024-07-24 19:25:36.210226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:05.216 [2024-07-24 19:25:36.210245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:119904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.216 [2024-07-24 19:25:36.210254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:05.216 [2024-07-24 19:25:36.210273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:119912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.216 [2024-07-24 19:25:36.210282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:05.216 [2024-07-24 19:25:36.210301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:119920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.216 [2024-07-24 19:25:36.210312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:05.216 [2024-07-24 19:25:36.210330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:119928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.216 [2024-07-24 19:25:36.210341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:05.216 [2024-07-24 19:25:36.210360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:119936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.216 [2024-07-24 19:25:36.210369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:05.216 [2024-07-24 19:25:36.210387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:119312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.216 [2024-07-24 19:25:36.210397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:05.216 [2024-07-24 19:25:36.210416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:119320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.216 [2024-07-24 19:25:36.210425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:05.216 [2024-07-24 19:25:36.210443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:119328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.217 [2024-07-24 19:25:36.210453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:05.217 [2024-07-24 19:25:36.210472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:119336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.217 [2024-07-24 19:25:36.210481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:05.217 [2024-07-24 19:25:36.210500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:119344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.217 [2024-07-24 19:25:36.210509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:05.217 [2024-07-24 19:25:36.210528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:119352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.217 [2024-07-24 19:25:36.210537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:05.217 [2024-07-24 19:25:36.210555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:119360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.217 [2024-07-24 19:25:36.210565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.217 [2024-07-24 19:25:36.210584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:119944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.217 [2024-07-24 19:25:36.210593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:05.217 [2024-07-24 19:25:36.210611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:119952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.217 [2024-07-24 19:25:36.210621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:05.217 [2024-07-24 19:25:36.210640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:119960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.217 [2024-07-24 19:25:36.210650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:05.217 [2024-07-24 19:25:36.210668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:119968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.217 [2024-07-24 19:25:36.210677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:05.217 [2024-07-24 19:25:36.210697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:119976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.217 [2024-07-24 19:25:36.210707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:05.217 [2024-07-24 19:25:36.210728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:119984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.217 [2024-07-24 19:25:36.210737] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:05.217 [2024-07-24 19:25:36.210755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:119992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.217 [2024-07-24 19:25:36.210764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:05.217 [2024-07-24 19:25:36.210783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:120000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.217 [2024-07-24 19:25:36.210792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:05.217 [2024-07-24 19:25:36.210810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:120008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.217 [2024-07-24 19:25:36.210819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:05.217 [2024-07-24 19:25:36.210845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:120016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.217 [2024-07-24 19:25:36.210855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:05.217 [2024-07-24 19:25:36.210873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:120024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.217 [2024-07-24 19:25:36.210882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:05.217 [2024-07-24 19:25:36.210900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:120032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.217 [2024-07-24 19:25:36.210910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:05.217 [2024-07-24 19:25:36.210928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.217 [2024-07-24 19:25:36.210937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:05.217 [2024-07-24 19:25:36.210955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:120048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.217 [2024-07-24 19:25:36.210965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:05.217 [2024-07-24 19:25:36.210983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:120056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.217 [2024-07-24 19:25:36.210992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:05.217 [2024-07-24 19:25:36.211011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:120064 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:05.217 [2024-07-24 19:25:36.211020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:05.217 [2024-07-24 19:25:36.211128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:120072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.217 [2024-07-24 19:25:36.211139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:05.217 [2024-07-24 19:25:36.211161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:120080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.217 [2024-07-24 19:25:36.211171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:05.217 [2024-07-24 19:25:36.211192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:120088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.217 [2024-07-24 19:25:36.211202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:05.217 [2024-07-24 19:25:36.211224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:120096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.217 [2024-07-24 19:25:36.211233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:05.217 [2024-07-24 19:25:36.211255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:120104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.217 [2024-07-24 19:25:36.211264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:05.217 [2024-07-24 19:25:36.211285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:120112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.217 [2024-07-24 19:25:36.211294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:05.217 [2024-07-24 19:25:36.211317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:120120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.217 [2024-07-24 19:25:36.211326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:05.217 [2024-07-24 19:25:36.211347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:120128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.217 [2024-07-24 19:25:36.211357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:05.217 [2024-07-24 19:25:36.211379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:120136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.217 [2024-07-24 19:25:36.211388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:05.217 [2024-07-24 19:25:36.211411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:52 nsid:1 lba:120144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.217 [2024-07-24 19:25:36.211422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:05.217 [2024-07-24 19:25:36.211444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:120152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.217 [2024-07-24 19:25:36.211453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:05.217 [2024-07-24 19:25:49.028070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.217 [2024-07-24 19:25:49.028112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.217 [2024-07-24 19:25:49.028148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.217 [2024-07-24 19:25:49.028159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:05.217 [2024-07-24 19:25:49.028174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.217 [2024-07-24 19:25:49.028183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:05.217 [2024-07-24 19:25:49.028198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.217 [2024-07-24 19:25:49.028207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:05.218 [2024-07-24 19:25:49.028222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.218 [2024-07-24 19:25:49.028231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:05.218 [2024-07-24 19:25:49.028245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.218 [2024-07-24 19:25:49.028255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:05.218 [2024-07-24 19:25:49.028269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.218 [2024-07-24 19:25:49.028279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:05.218 [2024-07-24 19:25:49.028293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.218 [2024-07-24 19:25:49.028304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:05.218 [2024-07-24 19:25:49.028318] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.218 [2024-07-24 19:25:49.028327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:05.218 [2024-07-24 19:25:49.028342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.218 [2024-07-24 19:25:49.028351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:05.218 [2024-07-24 19:25:49.028367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.218 [2024-07-24 19:25:49.028377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:05.218 [2024-07-24 19:25:49.028391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.218 [2024-07-24 19:25:49.028401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:05.218 [2024-07-24 19:25:49.028415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.218 [2024-07-24 19:25:49.028424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:05.218 [2024-07-24 19:25:49.028439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.218 [2024-07-24 19:25:49.028452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:05.218 [2024-07-24 19:25:49.028467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.218 [2024-07-24 19:25:49.028477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:05.218 [2024-07-24 19:25:49.028491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.218 [2024-07-24 19:25:49.028502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:05.218 [2024-07-24 19:25:49.028517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.218 [2024-07-24 19:25:49.028527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:05.218 [2024-07-24 19:25:49.028542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.218 [2024-07-24 19:25:49.028552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0012 p:0 m:0 
dnr:0 00:25:05.218 [2024-07-24 19:25:49.028568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.218 [2024-07-24 19:25:49.028578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:05.218 [2024-07-24 19:25:49.029623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.218 [2024-07-24 19:25:49.029644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:05.218 [2024-07-24 19:25:49.029663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.218 [2024-07-24 19:25:49.029673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:05.218 [2024-07-24 19:25:49.029688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.218 [2024-07-24 19:25:49.029698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:05.218 [2024-07-24 19:25:49.029713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.218 [2024-07-24 19:25:49.029729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:05.218 [2024-07-24 19:25:49.029744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.218 [2024-07-24 19:25:49.029754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:05.218 [2024-07-24 19:25:49.029769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.218 [2024-07-24 19:25:49.029780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:05.218 [2024-07-24 19:25:49.029795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.218 [2024-07-24 19:25:49.029807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:05.218 [2024-07-24 19:25:49.029822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.218 [2024-07-24 19:25:49.029832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:05.218 [2024-07-24 19:25:49.029846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.218 [2024-07-24 19:25:49.029856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:05.218 [2024-07-24 19:25:49.029871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.218 [2024-07-24 19:25:49.029880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:05.218 [2024-07-24 19:25:49.029895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.218 [2024-07-24 19:25:49.029904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:05.218 [2024-07-24 19:25:49.029919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.218 [2024-07-24 19:25:49.029928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:05.218 [2024-07-24 19:25:49.029943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.218 [2024-07-24 19:25:49.029952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:05.218 [2024-07-24 19:25:49.029967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.218 [2024-07-24 19:25:49.029976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.218 [2024-07-24 19:25:49.029991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:79496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.218 [2024-07-24 19:25:49.030001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:05.218 [2024-07-24 19:25:49.030015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.218 [2024-07-24 19:25:49.030024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:05.218 [2024-07-24 19:25:49.030039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.218 [2024-07-24 19:25:49.030049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:05.218 [2024-07-24 19:25:49.030064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.218 [2024-07-24 19:25:49.030073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:05.218 [2024-07-24 19:25:49.030087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.218 [2024-07-24 19:25:49.030097] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:05.218 [2024-07-24 19:25:49.030113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.218 [2024-07-24 19:25:49.030122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:05.219 [2024-07-24 19:25:49.030137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.219 [2024-07-24 19:25:49.030148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:05.219 [2024-07-24 19:25:49.030163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:78888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.219 [2024-07-24 19:25:49.030172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:05.219 [2024-07-24 19:25:49.030187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.219 [2024-07-24 19:25:49.030197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:05.219 [2024-07-24 19:25:49.030213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.219 [2024-07-24 19:25:49.030222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:05.219 [2024-07-24 19:25:49.030237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.219 [2024-07-24 19:25:49.030246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:05.219 [2024-07-24 19:25:49.030785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.219 [2024-07-24 19:25:49.030801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:05.219 [2024-07-24 19:25:49.030818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.219 [2024-07-24 19:25:49.030828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:05.219 [2024-07-24 19:25:49.030843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.219 [2024-07-24 19:25:49.030853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:05.219 [2024-07-24 19:25:49.030867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:05.219 [2024-07-24 19:25:49.030877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:05.219 [2024-07-24 19:25:49.030891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.219 [2024-07-24 19:25:49.030901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:05.219 [2024-07-24 19:25:49.030916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.219 [2024-07-24 19:25:49.030928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:05.219 [2024-07-24 19:25:49.030945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.219 [2024-07-24 19:25:49.030955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:05.219 [2024-07-24 19:25:49.030970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.219 [2024-07-24 19:25:49.030979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:05.219 [2024-07-24 19:25:49.030994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.219 [2024-07-24 19:25:49.031004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:05.219 [2024-07-24 19:25:49.031019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.219 [2024-07-24 19:25:49.031028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:05.219 [2024-07-24 19:25:49.031043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.219 [2024-07-24 19:25:49.031052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:05.219 [2024-07-24 19:25:49.031067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.219 [2024-07-24 19:25:49.031077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:05.219 [2024-07-24 19:25:49.031092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.219 [2024-07-24 19:25:49.031101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:05.219 [2024-07-24 19:25:49.031115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 
lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.219 [2024-07-24 19:25:49.031125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:05.219 [2024-07-24 19:25:49.031139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.219 [2024-07-24 19:25:49.031148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:05.219 [2024-07-24 19:25:49.031163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.219 [2024-07-24 19:25:49.031173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:05.219 [2024-07-24 19:25:49.031188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.219 [2024-07-24 19:25:49.031197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:05.219 [2024-07-24 19:25:49.031211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.219 [2024-07-24 19:25:49.031221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:05.219 [2024-07-24 19:25:49.031235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.219 [2024-07-24 19:25:49.031246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:05.219 [2024-07-24 19:25:49.031261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.219 [2024-07-24 19:25:49.031270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:05.219 Received shutdown signal, test time was about 27.031345 seconds 00:25:05.219 00:25:05.219 Latency(us) 00:25:05.219 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:05.219 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:05.219 Verification LBA range: start 0x0 length 0x4000 00:25:05.219 Nvme0n1 : 27.03 11038.18 43.12 0.00 0.00 11576.16 537.40 3019898.88 00:25:05.219 =================================================================================================================== 00:25:05.219 Total : 11038.18 43.12 0.00 0.00 11576.16 537.40 3019898.88 00:25:05.219 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:05.479 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:05.479 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f 
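Two quick consistency checks on the summary table (plain arithmetic, not part of the log): the MiB/s column is just IOPS at the job's 4096-byte I/O size, 11038.18 x 4096 B / 2^20 ≈ 43.12 MiB/s; and by Little's law, queue depth 128 at the 11576.16 us average latency sustains about 128 / 0.01157616 s ≈ 11057 IOPS, consistent with the measured 11038.18 over the 27.03 s run.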
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:05.479 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:05.479 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:05.479 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:25:05.479 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:05.479 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:25:05.479 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:05.479 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:05.479 rmmod nvme_tcp 00:25:05.479 rmmod nvme_fabrics 00:25:05.479 rmmod nvme_keyring 00:25:05.479 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:05.479 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:25:05.479 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:25:05.479 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1636324 ']' 00:25:05.479 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1636324 00:25:05.479 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1636324 ']' 00:25:05.479 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1636324 00:25:05.479 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:25:05.479 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:05.479 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1636324 00:25:05.480 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:05.480 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:05.480 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1636324' 00:25:05.480 killing process with pid 1636324 00:25:05.480 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1636324 00:25:05.480 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1636324 00:25:05.739 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:05.739 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:05.739 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:05.739 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:05.739 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:05.739 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.739 
19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:05.739 19:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.276 19:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:08.276 00:25:08.276 real 0m40.376s 00:25:08.276 user 1m43.149s 00:25:08.276 sys 0m14.475s 00:25:08.276 19:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:08.276 19:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:08.276 ************************************ 00:25:08.276 END TEST nvmf_host_multipath_status 00:25:08.276 ************************************ 00:25:08.276 19:25:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:08.276 19:25:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:08.276 19:25:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:08.276 19:25:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.276 ************************************ 00:25:08.276 START TEST nvmf_discovery_remove_ifc 00:25:08.276 ************************************ 00:25:08.276 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:08.276 * Looking for test storage... 00:25:08.276 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:08.276 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:08.276 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:08.276 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:08.276 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:08.276 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:08.276 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:08.276 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:08.276 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:08.276 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:08.276 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # 
NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.277 19:25:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:25:08.277 19:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:14.846 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:14.846 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:25:14.846 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:14.846 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:14.846 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:14.846 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:14.846 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:14.846 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:25:14.846 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:14.846 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:25:14.846 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:25:14.846 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:25:14.846 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:25:14.846 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:25:14.846 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:25:14.846 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:14.846 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:14.846 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:14.846 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:14.846 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:14.846 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:14.846 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:14.846 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:14.846 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:14.846 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:14.846 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:14.846 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:14.846 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:14.846 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:14.847 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:14.847 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:14.847 19:26:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:14.847 Found net devices under 0000:af:00.0: cvl_0_0 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:14.847 Found net devices under 0000:af:00.1: cvl_0_1 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:14.847 
19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:25:14.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:14.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms
00:25:14.847
00:25:14.847 --- 10.0.0.2 ping statistics ---
00:25:14.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:14.847 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms
00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:14.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:14.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms
00:25:14.847
00:25:14.847 --- 10.0.0.1 ping statistics ---
00:25:14.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:14.847 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms
00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0
00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:25:14.847 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:25:14.848 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2
00:25:14.848 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:25:14.848 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable
00:25:14.848 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:14.848 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1645468
00:25:14.848 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:25:14.848 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1645468
00:25:14.848 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1645468 ']'
00:25:14.848 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:14.848 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100
00:25:14.848 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:14.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:14.848 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable
00:25:14.848 19:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:14.848 [2024-07-24 19:26:01.024300] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization...
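Taken together, the nvmftestinit trace above does two things: it moves the target-side port into a private network namespace (so that deleting its address later severs the host/target path without touching the build host's networking), and it launches nvmf_tgt inside that namespace. A condensed sketch of the traced commands; the cvl_0_0/cvl_0_1 names and 10.0.0.x addresses are specific to this run, and the backgrounding of nvmf_tgt is paraphrased:

    # Target port goes into its own namespace; the initiator port stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns
    # nvmfappstart then runs the target inside the namespace:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!   # the real script records the pid differently; $! stands in here

waitforlisten then blocks until the target answers on /var/tmp/spdk.sock, which is the "Waiting for process to start up..." line above.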
00:25:14.848 [2024-07-24 19:26:01.024351] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:14.848 EAL: No free 2048 kB hugepages reported on node 1 00:25:15.105 [2024-07-24 19:26:01.097297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.105 [2024-07-24 19:26:01.163636] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:15.105 [2024-07-24 19:26:01.163679] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:15.105 [2024-07-24 19:26:01.163689] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:15.105 [2024-07-24 19:26:01.163697] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:15.106 [2024-07-24 19:26:01.163704] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:15.106 [2024-07-24 19:26:01.163738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.672 19:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:15.672 19:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:25:15.672 19:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:15.672 19:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:15.672 19:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:15.672 19:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:15.672 19:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:15.672 19:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.672 19:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:15.672 [2024-07-24 19:26:01.873640] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:15.672 [2024-07-24 19:26:01.881822] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:15.672 null0 00:25:15.984 [2024-07-24 19:26:01.913804] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:15.984 19:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.984 19:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1645525 00:25:15.984 19:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1645525 /tmp/host.sock 00:25:15.984 19:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1645525 ']' 00:25:15.984 19:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:25:15.984 19:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:15.984 19:26:01 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:15.984 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:15.984 19:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:15.984 19:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:15.984 19:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:15.984 [2024-07-24 19:26:01.984647] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:25:15.984 [2024-07-24 19:26:01.984692] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1645525 ] 00:25:15.984 EAL: No free 2048 kB hugepages reported on node 1 00:25:15.984 [2024-07-24 19:26:02.054473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.984 [2024-07-24 19:26:02.126985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:16.552 19:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:16.552 19:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:25:16.552 19:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:16.552 19:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:16.552 19:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.552 19:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:16.552 19:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.552 19:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:16.552 19:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.552 19:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:16.812 19:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.812 19:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:16.812 19:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.812 19:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:17.750 [2024-07-24 19:26:03.902291] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery 
ctrlr attached 00:25:17.750 [2024-07-24 19:26:03.902311] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:17.750 [2024-07-24 19:26:03.902324] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:18.010 [2024-07-24 19:26:04.030725] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:18.010 [2024-07-24 19:26:04.093819] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:18.010 [2024-07-24 19:26:04.093862] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:18.010 [2024-07-24 19:26:04.093883] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:18.010 [2024-07-24 19:26:04.093896] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:18.010 [2024-07-24 19:26:04.093915] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:18.010 19:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.010 19:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:18.010 19:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:18.010 19:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:18.010 19:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:18.010 19:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.010 19:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:18.010 19:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:18.010 19:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:18.010 [2024-07-24 19:26:04.101295] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x21f7d40 was disconnected and freed. delete nvme_qpair. 
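The get_bdev_list/wait_for_bdev pattern that repeats from here on is just an RPC pipeline plus a one-second poll loop. A sketch consistent with the commands traced above (the real helpers live in discovery_remove_ifc.sh and compare against a bash pattern rather than a plain string, so details may differ):

    get_bdev_list() {
        # Ask the host app (note -s /tmp/host.sock, not the target's socket)
        # for its bdev names, flattened to a single sorted line.
        ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll until the list matches: "nvme0n1" while attached, "" after
        # the interface is pulled, "nvme1n1" once discovery re-attaches.
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }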
00:25:18.010 19:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.010 19:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:18.010 19:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:18.010 19:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:18.269 19:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:18.269 19:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:18.269 19:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:18.269 19:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:18.269 19:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.269 19:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:18.269 19:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:18.269 19:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:18.269 19:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.269 19:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:18.269 19:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:19.209 19:26:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:19.209 19:26:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:19.209 19:26:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.209 19:26:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:19.209 19:26:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:19.209 19:26:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:19.209 19:26:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:19.209 19:26:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.209 19:26:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:19.209 19:26:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:20.148 19:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:20.148 19:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:20.148 19:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:20.148 19:26:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.148 19:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:20.148 19:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:20.148 19:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:20.408 19:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.408 19:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:20.408 19:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:21.344 19:26:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:21.344 19:26:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:21.344 19:26:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:21.344 19:26:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:21.344 19:26:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.344 19:26:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:21.344 19:26:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:21.344 19:26:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.344 19:26:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:21.344 19:26:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:22.282 19:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:22.282 19:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:22.282 19:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:22.282 19:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.282 19:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:22.282 19:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:22.282 19:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:22.283 19:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.542 19:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:22.542 19:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:23.481 [2024-07-24 19:26:09.535084] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:23.481 [2024-07-24 19:26:09.535128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.481 [2024-07-24 19:26:09.535145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.481 [2024-07-24 19:26:09.535156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.481 [2024-07-24 19:26:09.535165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.481 [2024-07-24 19:26:09.535175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.481 [2024-07-24 19:26:09.535184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.481 [2024-07-24 19:26:09.535193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.481 [2024-07-24 19:26:09.535202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.481 [2024-07-24 19:26:09.535213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.481 [2024-07-24 19:26:09.535222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.481 [2024-07-24 19:26:09.535231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21be740 is same with the state(5) to be set 00:25:23.481 [2024-07-24 19:26:09.545105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21be740 (9): Bad file descriptor 00:25:23.481 19:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:23.481 19:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:23.481 19:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:23.481 19:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:23.481 19:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.481 19:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:23.481 19:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:23.481 [2024-07-24 19:26:09.555142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:24.421 [2024-07-24 19:26:10.583740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:24.421 [2024-07-24 19:26:10.583810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21be740 with addr=10.0.0.2, port=4420 00:25:24.421 [2024-07-24 19:26:10.583830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21be740 is same with the state(5) to be set 00:25:24.421 [2024-07-24 19:26:10.583868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21be740 (9): Bad file descriptor 00:25:24.421 
[2024-07-24 19:26:10.584282] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:24.421 [2024-07-24 19:26:10.584314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:24.421 [2024-07-24 19:26:10.584327] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:24.421 [2024-07-24 19:26:10.584341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:24.421 [2024-07-24 19:26:10.584364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:24.421 [2024-07-24 19:26:10.584377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:24.421 19:26:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.421 19:26:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:24.421 19:26:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:25.359 [2024-07-24 19:26:11.586852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:25.359 [2024-07-24 19:26:11.586875] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:25.359 [2024-07-24 19:26:11.586885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:25.359 [2024-07-24 19:26:11.586895] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:25:25.359 [2024-07-24 19:26:11.586908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
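The cadence of this retry loop was fixed when discovery was started: reconnect once per second, fail pending I/O after one second, and give the controller up entirely after two. Replaying the command traced earlier (rpc_cmd in the trace is the autotest wrapper around rpc.py):

    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 \
        --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 \
        --wait-for-attach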
00:25:25.359 [2024-07-24 19:26:11.586927] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:25.359 [2024-07-24 19:26:11.586949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.360 [2024-07-24 19:26:11.586961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.360 [2024-07-24 19:26:11.586972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.360 [2024-07-24 19:26:11.586982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.360 [2024-07-24 19:26:11.586992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.360 [2024-07-24 19:26:11.587001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.360 [2024-07-24 19:26:11.587011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.360 [2024-07-24 19:26:11.587020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.360 [2024-07-24 19:26:11.587030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.360 [2024-07-24 19:26:11.587039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.360 [2024-07-24 19:26:11.587049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:25:25.360 [2024-07-24 19:26:11.587073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21bdba0 (9): Bad file descriptor 00:25:25.360 [2024-07-24 19:26:11.588073] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:25.360 [2024-07-24 19:26:11.588085] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:25:25.619 19:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:25.619 19:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:25.619 19:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:25.619 19:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:25.619 19:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:25.619 19:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.619 19:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:25.619 19:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.619 19:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:25.619 19:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:25.619 19:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:25.619 19:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:25.619 19:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:25.619 19:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:25.619 19:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:25.619 19:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.619 19:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:25.619 19:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:25.619 19:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:25.619 19:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.619 19:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:25.619 19:26:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:26.999 19:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:26.999 19:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:26.999 19:26:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:26.999 19:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.999 19:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:26.999 19:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:26.999 19:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:26.999 19:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.999 19:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:26.999 19:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:27.569 [2024-07-24 19:26:13.596679] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:27.569 [2024-07-24 19:26:13.596697] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:27.569 [2024-07-24 19:26:13.596711] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:27.569 [2024-07-24 19:26:13.683968] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:27.569 [2024-07-24 19:26:13.788425] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:27.569 [2024-07-24 19:26:13.788457] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:27.569 [2024-07-24 19:26:13.788476] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:27.569 [2024-07-24 19:26:13.788491] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:27.569 [2024-07-24 19:26:13.788499] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:27.569 [2024-07-24 19:26:13.795978] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x21ad110 was disconnected and freed. delete nvme_qpair. 
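Editor's note: the wait_for_bdev loop traced above polls the host's bdev list once per second until nvme1n1 reappears after discovery re-attaches the subsystem. An approximation of the two helpers, assuming rpc_cmd wraps SPDK's scripts/rpc.py against the -s /tmp/host.sock socket shown in the trace:

# Approximation of the traced helpers; rpc_cmd is assumed to wrap scripts/rpc.py.
rpc_cmd() {
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"
}
get_bdev_list() {
    # Sorted, space-joined bdev names from the host app, as in the traced pipeline.
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
wait_for_bdev() {
    # Spin until the list equals the expected name, with the one-second poll seen in the trace.
    local bdev=$1
    while [[ "$(get_bdev_list)" != "$bdev" ]]; do
        sleep 1
    done
}
wait_for_bdev nvme1n1   # returns once discovery re-creates nvme1n1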
00:25:27.828 19:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:27.828 19:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:27.828 19:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:27.828 19:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:27.828 19:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.828 19:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:27.828 19:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:27.828 19:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.828 19:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:27.828 19:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:27.828 19:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1645525 00:25:27.828 19:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1645525 ']' 00:25:27.828 19:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1645525 00:25:27.828 19:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:25:27.828 19:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:27.828 19:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1645525 00:25:27.828 19:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:27.828 19:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:27.828 19:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1645525' 00:25:27.828 killing process with pid 1645525 00:25:27.828 19:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1645525 00:25:27.828 19:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1645525 00:25:28.087 19:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:28.087 19:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:28.087 19:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:25:28.087 19:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:28.087 19:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:25:28.087 19:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:28.087 19:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:28.087 rmmod nvme_tcp 00:25:28.087 rmmod nvme_fabrics 00:25:28.087 rmmod nvme_keyring 00:25:28.087 19:26:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:28.087 19:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:25:28.087 19:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:25:28.087 19:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1645468 ']' 00:25:28.087 19:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1645468 00:25:28.087 19:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1645468 ']' 00:25:28.087 19:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1645468 00:25:28.087 19:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:25:28.087 19:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:28.087 19:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1645468 00:25:28.087 19:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:28.087 19:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:28.087 19:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1645468' 00:25:28.087 killing process with pid 1645468 00:25:28.087 19:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1645468 00:25:28.087 19:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1645468 00:25:28.347 19:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:28.347 19:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:28.347 19:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:28.347 19:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:28.347 19:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:28.347 19:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.347 19:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:28.347 19:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.886 19:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:30.886 00:25:30.886 real 0m22.457s 00:25:30.886 user 0m26.152s 00:25:30.886 sys 0m7.198s 00:25:30.886 19:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:30.886 19:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:30.886 ************************************ 00:25:30.886 END TEST nvmf_discovery_remove_ifc 00:25:30.886 ************************************ 00:25:30.886 19:26:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:30.886 19:26:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:30.886 19:26:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:30.886 19:26:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.886 ************************************ 00:25:30.886 START TEST nvmf_identify_kernel_target 00:25:30.886 ************************************ 00:25:30.886 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:30.886 * Looking for test storage... 00:25:30.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:30.886 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:30.886 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:30.886 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:30.886 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:30.886 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:30.886 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:30.886 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:30.886 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:30.886 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:30.886 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:30.887 19:26:16 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:25:30.887 19:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:25:37.533 
19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:37.533 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:37.533 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:37.533 Found net devices under 0000:af:00.0: cvl_0_0 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:37.533 Found net devices under 0000:af:00.1: cvl_0_1 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:37.533 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:37.534 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:37.534 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:37.534 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:37.534 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:37.534 19:26:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:37.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:37.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms
00:25:37.534
00:25:37.534 --- 10.0.0.2 ping statistics ---
00:25:37.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:37.534 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms
00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:37.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:37.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms
00:25:37.534
00:25:37.534 --- 10.0.0.1 ping statistics ---
00:25:37.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:37.534 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms
00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0
00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT
00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip
00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip
00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1
00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1
00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1
00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet
00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme
00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]]
00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet
00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]]
00:25:37.534 19:26:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:25:40.829 Waiting for block devices as requested
00:25:40.829 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:25:40.829 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:25:40.829 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:25:40.829 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:25:40.829 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:25:40.829 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:25:40.829 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:25:41.089 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:25:41.089 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:25:41.089 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:25:41.348 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:25:41.348 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:25:41.348 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:25:41.608 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:25:41.608 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:25:41.608 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:25:41.868 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme
00:25:41.868 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme*
00:25:41.868 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]]
00:25:41.868 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1
00:25:41.868 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:25:41.869 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:25:41.869 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:25:41.869 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1
00:25:41.869 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt
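Editor's note: before exporting a namespace, the scan above filters out zoned block devices and devices already carrying a partition table. The zoned half, restated from the is_block_zoned trace (the early return when the zoned attribute is missing is inferred from the flow, so treat it as an assumption):

is_block_zoned() {
    # Succeeds only for a device the kernel reports as zoned;
    # the caller skips such devices when picking a namespace to export.
    local device=$1
    [[ -e /sys/block/$device/queue/zoned ]] || return 1  # assumed: no attribute means not zoned
    [[ $(cat /sys/block/$device/queue/zoned) != none ]]
}
is_block_zoned nvme0n1 || echo "nvme0n1 is not zoned, candidate for export"

Here nvme0n1 reads none, so the device passes and the GPT check traced below takes over.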
00:25:41.869 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:41.869 No valid GPT data, bailing 00:25:41.869 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:41.869 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:41.869 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:41.869 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:41.869 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:41.869 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:41.869 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:41.869 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:41.869 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:41.869 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:25:41.869 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:41.869 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:25:41.869 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:41.869 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:25:41.869 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:25:41.869 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:25:41.869 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:41.869 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:25:42.130 00:25:42.130 Discovery Log Number of Records 2, Generation counter 2 00:25:42.130 =====Discovery Log Entry 0====== 00:25:42.130 trtype: tcp 00:25:42.130 adrfam: ipv4 00:25:42.130 subtype: current discovery subsystem 00:25:42.130 treq: not specified, sq flow control disable supported 00:25:42.130 portid: 1 00:25:42.130 trsvcid: 4420 00:25:42.130 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:42.130 traddr: 10.0.0.1 00:25:42.130 eflags: none 00:25:42.130 sectype: none 00:25:42.130 =====Discovery Log Entry 1====== 00:25:42.130 trtype: tcp 00:25:42.130 adrfam: ipv4 00:25:42.130 subtype: nvme subsystem 00:25:42.130 treq: not specified, sq flow control disable supported 00:25:42.130 portid: 1 00:25:42.130 trsvcid: 4420 00:25:42.130 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:42.130 traddr: 10.0.0.1 00:25:42.130 eflags: none 00:25:42.130 sectype: none 00:25:42.131 19:26:28 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:42.131 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:42.131 EAL: No free 2048 kB hugepages reported on node 1 00:25:42.131 ===================================================== 00:25:42.131 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:42.131 ===================================================== 00:25:42.131 Controller Capabilities/Features 00:25:42.131 ================================ 00:25:42.131 Vendor ID: 0000 00:25:42.131 Subsystem Vendor ID: 0000 00:25:42.131 Serial Number: edb3984eafda6ab7b529 00:25:42.131 Model Number: Linux 00:25:42.131 Firmware Version: 6.7.0-68 00:25:42.131 Recommended Arb Burst: 0 00:25:42.131 IEEE OUI Identifier: 00 00 00 00:25:42.131 Multi-path I/O 00:25:42.131 May have multiple subsystem ports: No 00:25:42.131 May have multiple controllers: No 00:25:42.131 Associated with SR-IOV VF: No 00:25:42.131 Max Data Transfer Size: Unlimited 00:25:42.131 Max Number of Namespaces: 0 00:25:42.131 Max Number of I/O Queues: 1024 00:25:42.131 NVMe Specification Version (VS): 1.3 00:25:42.131 NVMe Specification Version (Identify): 1.3 00:25:42.131 Maximum Queue Entries: 1024 00:25:42.131 Contiguous Queues Required: No 00:25:42.131 Arbitration Mechanisms Supported 00:25:42.131 Weighted Round Robin: Not Supported 00:25:42.131 Vendor Specific: Not Supported 00:25:42.131 Reset Timeout: 7500 ms 00:25:42.131 Doorbell Stride: 4 bytes 00:25:42.131 NVM Subsystem Reset: Not Supported 00:25:42.131 Command Sets Supported 00:25:42.131 NVM Command Set: Supported 00:25:42.131 Boot Partition: Not Supported 00:25:42.131 Memory Page Size Minimum: 4096 bytes 00:25:42.131 Memory Page Size Maximum: 4096 bytes 00:25:42.131 Persistent Memory Region: Not Supported 00:25:42.131 Optional Asynchronous Events Supported 00:25:42.131 Namespace Attribute Notices: Not Supported 00:25:42.131 Firmware Activation Notices: Not Supported 00:25:42.131 ANA Change Notices: Not Supported 00:25:42.131 PLE Aggregate Log Change Notices: Not Supported 00:25:42.131 LBA Status Info Alert Notices: Not Supported 00:25:42.131 EGE Aggregate Log Change Notices: Not Supported 00:25:42.131 Normal NVM Subsystem Shutdown event: Not Supported 00:25:42.131 Zone Descriptor Change Notices: Not Supported 00:25:42.131 Discovery Log Change Notices: Supported 00:25:42.131 Controller Attributes 00:25:42.131 128-bit Host Identifier: Not Supported 00:25:42.131 Non-Operational Permissive Mode: Not Supported 00:25:42.131 NVM Sets: Not Supported 00:25:42.131 Read Recovery Levels: Not Supported 00:25:42.131 Endurance Groups: Not Supported 00:25:42.131 Predictable Latency Mode: Not Supported 00:25:42.131 Traffic Based Keep ALive: Not Supported 00:25:42.131 Namespace Granularity: Not Supported 00:25:42.131 SQ Associations: Not Supported 00:25:42.131 UUID List: Not Supported 00:25:42.131 Multi-Domain Subsystem: Not Supported 00:25:42.131 Fixed Capacity Management: Not Supported 00:25:42.131 Variable Capacity Management: Not Supported 00:25:42.131 Delete Endurance Group: Not Supported 00:25:42.131 Delete NVM Set: Not Supported 00:25:42.131 Extended LBA Formats Supported: Not Supported 00:25:42.131 Flexible Data Placement Supported: Not Supported 00:25:42.131 00:25:42.131 Controller Memory Buffer Support 00:25:42.131 ================================ 00:25:42.131 Supported: No 
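Editor's note, stepping back from the identify dump for a moment: the kernel target it is talking to was assembled through the nvmet configfs tree traced earlier (the mkdir, echo, and ln -s entries at nvmf/common.sh@658 through @677). xtrace does not record redirection targets, so the attribute file names below are inferred from the standard nvmet configfs layout rather than read from this log:

# Hedged reconstruction of configure_kernel_target's configfs writes.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"  # matches the Model Number reported below
echo 1 > "$subsys/attr_allow_any_host"                        # assumed target of the traced 'echo 1'
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"        # the non-zoned, GPT-free device found above
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"                  # publish the subsystem on the port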
00:25:42.131 00:25:42.131 Persistent Memory Region Support 00:25:42.131 ================================ 00:25:42.131 Supported: No 00:25:42.131 00:25:42.131 Admin Command Set Attributes 00:25:42.131 ============================ 00:25:42.131 Security Send/Receive: Not Supported 00:25:42.131 Format NVM: Not Supported 00:25:42.131 Firmware Activate/Download: Not Supported 00:25:42.131 Namespace Management: Not Supported 00:25:42.131 Device Self-Test: Not Supported 00:25:42.131 Directives: Not Supported 00:25:42.131 NVMe-MI: Not Supported 00:25:42.131 Virtualization Management: Not Supported 00:25:42.131 Doorbell Buffer Config: Not Supported 00:25:42.131 Get LBA Status Capability: Not Supported 00:25:42.131 Command & Feature Lockdown Capability: Not Supported 00:25:42.131 Abort Command Limit: 1 00:25:42.131 Async Event Request Limit: 1 00:25:42.131 Number of Firmware Slots: N/A 00:25:42.131 Firmware Slot 1 Read-Only: N/A 00:25:42.131 Firmware Activation Without Reset: N/A 00:25:42.131 Multiple Update Detection Support: N/A 00:25:42.131 Firmware Update Granularity: No Information Provided 00:25:42.131 Per-Namespace SMART Log: No 00:25:42.131 Asymmetric Namespace Access Log Page: Not Supported 00:25:42.131 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:42.131 Command Effects Log Page: Not Supported 00:25:42.131 Get Log Page Extended Data: Supported 00:25:42.131 Telemetry Log Pages: Not Supported 00:25:42.131 Persistent Event Log Pages: Not Supported 00:25:42.131 Supported Log Pages Log Page: May Support 00:25:42.131 Commands Supported & Effects Log Page: Not Supported 00:25:42.131 Feature Identifiers & Effects Log Page:May Support 00:25:42.131 NVMe-MI Commands & Effects Log Page: May Support 00:25:42.131 Data Area 4 for Telemetry Log: Not Supported 00:25:42.131 Error Log Page Entries Supported: 1 00:25:42.131 Keep Alive: Not Supported 00:25:42.131 00:25:42.131 NVM Command Set Attributes 00:25:42.131 ========================== 00:25:42.131 Submission Queue Entry Size 00:25:42.131 Max: 1 00:25:42.131 Min: 1 00:25:42.131 Completion Queue Entry Size 00:25:42.131 Max: 1 00:25:42.131 Min: 1 00:25:42.131 Number of Namespaces: 0 00:25:42.131 Compare Command: Not Supported 00:25:42.131 Write Uncorrectable Command: Not Supported 00:25:42.131 Dataset Management Command: Not Supported 00:25:42.131 Write Zeroes Command: Not Supported 00:25:42.131 Set Features Save Field: Not Supported 00:25:42.131 Reservations: Not Supported 00:25:42.131 Timestamp: Not Supported 00:25:42.131 Copy: Not Supported 00:25:42.131 Volatile Write Cache: Not Present 00:25:42.131 Atomic Write Unit (Normal): 1 00:25:42.131 Atomic Write Unit (PFail): 1 00:25:42.131 Atomic Compare & Write Unit: 1 00:25:42.131 Fused Compare & Write: Not Supported 00:25:42.131 Scatter-Gather List 00:25:42.131 SGL Command Set: Supported 00:25:42.131 SGL Keyed: Not Supported 00:25:42.131 SGL Bit Bucket Descriptor: Not Supported 00:25:42.131 SGL Metadata Pointer: Not Supported 00:25:42.131 Oversized SGL: Not Supported 00:25:42.131 SGL Metadata Address: Not Supported 00:25:42.131 SGL Offset: Supported 00:25:42.131 Transport SGL Data Block: Not Supported 00:25:42.131 Replay Protected Memory Block: Not Supported 00:25:42.131 00:25:42.131 Firmware Slot Information 00:25:42.131 ========================= 00:25:42.131 Active slot: 0 00:25:42.131 00:25:42.131 00:25:42.132 Error Log 00:25:42.132 ========= 00:25:42.132 00:25:42.132 Active Namespaces 00:25:42.132 ================= 00:25:42.132 Discovery Log Page 00:25:42.132 ================== 00:25:42.132 
Generation Counter: 2 00:25:42.132 Number of Records: 2 00:25:42.132 Record Format: 0 00:25:42.132 00:25:42.132 Discovery Log Entry 0 00:25:42.132 ---------------------- 00:25:42.132 Transport Type: 3 (TCP) 00:25:42.132 Address Family: 1 (IPv4) 00:25:42.132 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:42.132 Entry Flags: 00:25:42.132 Duplicate Returned Information: 0 00:25:42.132 Explicit Persistent Connection Support for Discovery: 0 00:25:42.132 Transport Requirements: 00:25:42.132 Secure Channel: Not Specified 00:25:42.132 Port ID: 1 (0x0001) 00:25:42.132 Controller ID: 65535 (0xffff) 00:25:42.132 Admin Max SQ Size: 32 00:25:42.132 Transport Service Identifier: 4420 00:25:42.132 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:42.132 Transport Address: 10.0.0.1 00:25:42.132 Discovery Log Entry 1 00:25:42.132 ---------------------- 00:25:42.132 Transport Type: 3 (TCP) 00:25:42.132 Address Family: 1 (IPv4) 00:25:42.132 Subsystem Type: 2 (NVM Subsystem) 00:25:42.132 Entry Flags: 00:25:42.132 Duplicate Returned Information: 0 00:25:42.132 Explicit Persistent Connection Support for Discovery: 0 00:25:42.132 Transport Requirements: 00:25:42.132 Secure Channel: Not Specified 00:25:42.132 Port ID: 1 (0x0001) 00:25:42.132 Controller ID: 65535 (0xffff) 00:25:42.132 Admin Max SQ Size: 32 00:25:42.132 Transport Service Identifier: 4420 00:25:42.132 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:42.132 Transport Address: 10.0.0.1 00:25:42.132 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:42.132 EAL: No free 2048 kB hugepages reported on node 1 00:25:42.132 get_feature(0x01) failed 00:25:42.132 get_feature(0x02) failed 00:25:42.132 get_feature(0x04) failed 00:25:42.132 ===================================================== 00:25:42.132 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:42.132 ===================================================== 00:25:42.132 Controller Capabilities/Features 00:25:42.132 ================================ 00:25:42.132 Vendor ID: 0000 00:25:42.132 Subsystem Vendor ID: 0000 00:25:42.132 Serial Number: df81858bd288564af9f4 00:25:42.132 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:42.132 Firmware Version: 6.7.0-68 00:25:42.132 Recommended Arb Burst: 6 00:25:42.132 IEEE OUI Identifier: 00 00 00 00:25:42.132 Multi-path I/O 00:25:42.132 May have multiple subsystem ports: Yes 00:25:42.132 May have multiple controllers: Yes 00:25:42.132 Associated with SR-IOV VF: No 00:25:42.132 Max Data Transfer Size: Unlimited 00:25:42.132 Max Number of Namespaces: 1024 00:25:42.132 Max Number of I/O Queues: 128 00:25:42.132 NVMe Specification Version (VS): 1.3 00:25:42.132 NVMe Specification Version (Identify): 1.3 00:25:42.132 Maximum Queue Entries: 1024 00:25:42.132 Contiguous Queues Required: No 00:25:42.132 Arbitration Mechanisms Supported 00:25:42.132 Weighted Round Robin: Not Supported 00:25:42.132 Vendor Specific: Not Supported 00:25:42.132 Reset Timeout: 7500 ms 00:25:42.132 Doorbell Stride: 4 bytes 00:25:42.132 NVM Subsystem Reset: Not Supported 00:25:42.132 Command Sets Supported 00:25:42.132 NVM Command Set: Supported 00:25:42.132 Boot Partition: Not Supported 00:25:42.132 Memory Page Size Minimum: 4096 bytes 00:25:42.132 Memory Page Size Maximum: 4096 bytes 00:25:42.132 
Persistent Memory Region: Not Supported 00:25:42.132 Optional Asynchronous Events Supported 00:25:42.132 Namespace Attribute Notices: Supported 00:25:42.132 Firmware Activation Notices: Not Supported 00:25:42.132 ANA Change Notices: Supported 00:25:42.132 PLE Aggregate Log Change Notices: Not Supported 00:25:42.132 LBA Status Info Alert Notices: Not Supported 00:25:42.132 EGE Aggregate Log Change Notices: Not Supported 00:25:42.132 Normal NVM Subsystem Shutdown event: Not Supported 00:25:42.132 Zone Descriptor Change Notices: Not Supported 00:25:42.132 Discovery Log Change Notices: Not Supported 00:25:42.132 Controller Attributes 00:25:42.132 128-bit Host Identifier: Supported 00:25:42.132 Non-Operational Permissive Mode: Not Supported 00:25:42.132 NVM Sets: Not Supported 00:25:42.132 Read Recovery Levels: Not Supported 00:25:42.132 Endurance Groups: Not Supported 00:25:42.132 Predictable Latency Mode: Not Supported 00:25:42.132 Traffic Based Keep ALive: Supported 00:25:42.132 Namespace Granularity: Not Supported 00:25:42.132 SQ Associations: Not Supported 00:25:42.132 UUID List: Not Supported 00:25:42.132 Multi-Domain Subsystem: Not Supported 00:25:42.132 Fixed Capacity Management: Not Supported 00:25:42.132 Variable Capacity Management: Not Supported 00:25:42.132 Delete Endurance Group: Not Supported 00:25:42.132 Delete NVM Set: Not Supported 00:25:42.132 Extended LBA Formats Supported: Not Supported 00:25:42.132 Flexible Data Placement Supported: Not Supported 00:25:42.132 00:25:42.132 Controller Memory Buffer Support 00:25:42.132 ================================ 00:25:42.132 Supported: No 00:25:42.132 00:25:42.132 Persistent Memory Region Support 00:25:42.132 ================================ 00:25:42.132 Supported: No 00:25:42.132 00:25:42.132 Admin Command Set Attributes 00:25:42.132 ============================ 00:25:42.132 Security Send/Receive: Not Supported 00:25:42.132 Format NVM: Not Supported 00:25:42.132 Firmware Activate/Download: Not Supported 00:25:42.132 Namespace Management: Not Supported 00:25:42.132 Device Self-Test: Not Supported 00:25:42.132 Directives: Not Supported 00:25:42.132 NVMe-MI: Not Supported 00:25:42.132 Virtualization Management: Not Supported 00:25:42.132 Doorbell Buffer Config: Not Supported 00:25:42.132 Get LBA Status Capability: Not Supported 00:25:42.132 Command & Feature Lockdown Capability: Not Supported 00:25:42.132 Abort Command Limit: 4 00:25:42.132 Async Event Request Limit: 4 00:25:42.132 Number of Firmware Slots: N/A 00:25:42.132 Firmware Slot 1 Read-Only: N/A 00:25:42.132 Firmware Activation Without Reset: N/A 00:25:42.132 Multiple Update Detection Support: N/A 00:25:42.132 Firmware Update Granularity: No Information Provided 00:25:42.132 Per-Namespace SMART Log: Yes 00:25:42.132 Asymmetric Namespace Access Log Page: Supported 00:25:42.132 ANA Transition Time : 10 sec 00:25:42.132 00:25:42.132 Asymmetric Namespace Access Capabilities 00:25:42.132 ANA Optimized State : Supported 00:25:42.132 ANA Non-Optimized State : Supported 00:25:42.132 ANA Inaccessible State : Supported 00:25:42.133 ANA Persistent Loss State : Supported 00:25:42.133 ANA Change State : Supported 00:25:42.133 ANAGRPID is not changed : No 00:25:42.133 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:42.133 00:25:42.133 ANA Group Identifier Maximum : 128 00:25:42.133 Number of ANA Group Identifiers : 128 00:25:42.133 Max Number of Allowed Namespaces : 1024 00:25:42.133 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:42.133 Command Effects Log Page: Supported 
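Editor's note, an aside before the rest of the attribute dump: everything in this report was fetched over fabrics with spdk_nvme_identify, but the same exported subsystem could also be attached with nvme-cli, reusing the host identity generated at the top of this test. A hedged example, not something this test performs:

nvme connect -t tcp -a 10.0.0.1 -s 4420 \
    -n nqn.2016-06.io.spdk:testnqn \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e \
    --hostid=006f0d1b-21c0-e711-906e-00163566263e
nvme list                                        # the 1490GiB namespace reported below should appear as a local device
nvme disconnect -n nqn.2016-06.io.spdk:testnqn   # detach again before clean_kernel_target runs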
00:25:42.133 Get Log Page Extended Data: Supported
00:25:42.133 Telemetry Log Pages: Not Supported
00:25:42.133 Persistent Event Log Pages: Not Supported
00:25:42.133 Supported Log Pages Log Page: May Support
00:25:42.133 Commands Supported & Effects Log Page: Not Supported
00:25:42.133 Feature Identifiers & Effects Log Page:May Support
00:25:42.133 NVMe-MI Commands & Effects Log Page: May Support
00:25:42.133 Data Area 4 for Telemetry Log: Not Supported
00:25:42.133 Error Log Page Entries Supported: 128
00:25:42.133 Keep Alive: Supported
00:25:42.133 Keep Alive Granularity: 1000 ms
00:25:42.133
00:25:42.133 NVM Command Set Attributes
00:25:42.133 ==========================
00:25:42.133 Submission Queue Entry Size
00:25:42.133 Max: 64
00:25:42.133 Min: 64
00:25:42.133 Completion Queue Entry Size
00:25:42.133 Max: 16
00:25:42.133 Min: 16
00:25:42.133 Number of Namespaces: 1024
00:25:42.133 Compare Command: Not Supported
00:25:42.133 Write Uncorrectable Command: Not Supported
00:25:42.133 Dataset Management Command: Supported
00:25:42.133 Write Zeroes Command: Supported
00:25:42.133 Set Features Save Field: Not Supported
00:25:42.133 Reservations: Not Supported
00:25:42.133 Timestamp: Not Supported
00:25:42.133 Copy: Not Supported
00:25:42.133 Volatile Write Cache: Present
00:25:42.133 Atomic Write Unit (Normal): 1
00:25:42.133 Atomic Write Unit (PFail): 1
00:25:42.133 Atomic Compare & Write Unit: 1
00:25:42.133 Fused Compare & Write: Not Supported
00:25:42.133 Scatter-Gather List
00:25:42.133 SGL Command Set: Supported
00:25:42.133 SGL Keyed: Not Supported
00:25:42.133 SGL Bit Bucket Descriptor: Not Supported
00:25:42.133 SGL Metadata Pointer: Not Supported
00:25:42.133 Oversized SGL: Not Supported
00:25:42.133 SGL Metadata Address: Not Supported
00:25:42.133 SGL Offset: Supported
00:25:42.133 Transport SGL Data Block: Not Supported
00:25:42.133 Replay Protected Memory Block: Not Supported
00:25:42.133
00:25:42.133 Firmware Slot Information
00:25:42.133 =========================
00:25:42.133 Active slot: 0
00:25:42.133
00:25:42.133 Asymmetric Namespace Access
00:25:42.133 ===========================
00:25:42.133 Change Count : 0
00:25:42.133 Number of ANA Group Descriptors : 1
00:25:42.133 ANA Group Descriptor : 0
00:25:42.133 ANA Group ID : 1
00:25:42.133 Number of NSID Values : 1
00:25:42.133 Change Count : 0
00:25:42.133 ANA State : 1
00:25:42.133 Namespace Identifier : 1
00:25:42.133
00:25:42.133 Commands Supported and Effects
00:25:42.133 ==============================
00:25:42.133 Admin Commands
00:25:42.133 --------------
00:25:42.133 Get Log Page (02h): Supported
00:25:42.133 Identify (06h): Supported
00:25:42.133 Abort (08h): Supported
00:25:42.133 Set Features (09h): Supported
00:25:42.133 Get Features (0Ah): Supported
00:25:42.133 Asynchronous Event Request (0Ch): Supported
00:25:42.133 Keep Alive (18h): Supported
00:25:42.133 I/O Commands
00:25:42.133 ------------
00:25:42.133 Flush (00h): Supported
00:25:42.133 Write (01h): Supported LBA-Change
00:25:42.133 Read (02h): Supported
00:25:42.133 Write Zeroes (08h): Supported LBA-Change
00:25:42.133 Dataset Management (09h): Supported
00:25:42.133
00:25:42.133 Error Log
00:25:42.133 =========
00:25:42.133 Entry: 0
00:25:42.133 Error Count: 0x3
00:25:42.133 Submission Queue Id: 0x0
00:25:42.133 Command Id: 0x5
00:25:42.133 Phase Bit: 0
00:25:42.133 Status Code: 0x2
00:25:42.133 Status Code Type: 0x0
00:25:42.133 Do Not Retry: 1
00:25:42.133 Error Location: 0x28
00:25:42.133 LBA: 0x0
00:25:42.133 Namespace: 0x0
00:25:42.133 Vendor Log Page: 0x0
00:25:42.133 -----------
00:25:42.133 Entry: 1
00:25:42.133 Error Count: 0x2
00:25:42.133 Submission Queue Id: 0x0
00:25:42.133 Command Id: 0x5
00:25:42.133 Phase Bit: 0
00:25:42.133 Status Code: 0x2
00:25:42.133 Status Code Type: 0x0
00:25:42.133 Do Not Retry: 1
00:25:42.133 Error Location: 0x28
00:25:42.133 LBA: 0x0
00:25:42.133 Namespace: 0x0
00:25:42.133 Vendor Log Page: 0x0
00:25:42.133 -----------
00:25:42.133 Entry: 2
00:25:42.133 Error Count: 0x1
00:25:42.133 Submission Queue Id: 0x0
00:25:42.133 Command Id: 0x4
00:25:42.133 Phase Bit: 0
00:25:42.133 Status Code: 0x2
00:25:42.133 Status Code Type: 0x0
00:25:42.133 Do Not Retry: 1
00:25:42.133 Error Location: 0x28
00:25:42.133 LBA: 0x0
00:25:42.133 Namespace: 0x0
00:25:42.133 Vendor Log Page: 0x0
00:25:42.133
00:25:42.133 Number of Queues
00:25:42.133 ================
00:25:42.133 Number of I/O Submission Queues: 128
00:25:42.133 Number of I/O Completion Queues: 128
00:25:42.133
00:25:42.133 ZNS Specific Controller Data
00:25:42.133 ============================
00:25:42.133 Zone Append Size Limit: 0
00:25:42.133
00:25:42.133
00:25:42.134 Active Namespaces
00:25:42.134 =================
00:25:42.134 get_feature(0x05) failed
00:25:42.134 Namespace ID:1
00:25:42.134 Command Set Identifier: NVM (00h)
00:25:42.134 Deallocate: Supported
00:25:42.134 Deallocated/Unwritten Error: Not Supported
00:25:42.134 Deallocated Read Value: Unknown
00:25:42.134 Deallocate in Write Zeroes: Not Supported
00:25:42.134 Deallocated Guard Field: 0xFFFF
00:25:42.134 Flush: Supported
00:25:42.134 Reservation: Not Supported
00:25:42.134 Namespace Sharing Capabilities: Multiple Controllers
00:25:42.134 Size (in LBAs): 3125627568 (1490GiB)
00:25:42.134 Capacity (in LBAs): 3125627568 (1490GiB)
00:25:42.134 Utilization (in LBAs): 3125627568 (1490GiB)
00:25:42.134 UUID: 282637b0-d55e-43b2-bbcc-d62f9ae6df6d
00:25:42.134 Thin Provisioning: Not Supported
00:25:42.134 Per-NS Atomic Units: Yes
00:25:42.134 Atomic Boundary Size (Normal): 0
00:25:42.134 Atomic Boundary Size (PFail): 0
00:25:42.134 Atomic Boundary Offset: 0
00:25:42.134 NGUID/EUI64 Never Reused: No
00:25:42.134 ANA group ID: 1
00:25:42.134 Namespace Write Protected: No
00:25:42.134 Number of LBA Formats: 1
00:25:42.134 Current LBA Format: LBA Format #00
00:25:42.134 LBA Format #00: Data Size: 512 Metadata Size: 0
00:25:42.134
00:25:42.134 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini
00:25:42.134 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:42.134 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync
00:25:42.134 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:42.134 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e
00:25:42.134 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:42.134 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:25:42.134 rmmod nvme_tcp
00:25:42.134 rmmod nvme_fabrics
00:25:42.134 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:42.134 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e
00:25:42.134 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0
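A minimal sketch of the nvmfcleanup teardown traced above (nvmf/common.sh lines @117-@125); the traced commands are verbatim, while the back-off between retries is an assumption about the untraced parts of the loop:

    sync                                    # @117: flush dirty pages before unloading modules
    set +e                                  # @120: modprobe -r may fail while references drain
    for i in {1..20}; do                    # @121: retry while controllers disconnect
        modprobe -v -r nvme-tcp             # @122: prints "rmmod nvme_tcp" on success
        modprobe -v -r nvme-fabrics && break  # @123: succeeds once nothing holds the module
        sleep 1                             # assumed pause between attempts
    done
    set -e                                  # @124: restore errexit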
00:25:42.134 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:25:42.134 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:25:42.134 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:25:42.134 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:25:42.134 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:25:42.134 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns
00:25:42.134 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:42.134 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:42.134 19:26:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:44.673 19:26:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:25:44.673 19:26:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target
00:25:44.673 19:26:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:25:44.673 19:26:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0
00:25:44.673 19:26:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:25:44.673 19:26:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:25:44.673 19:26:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:25:44.673 19:26:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:25:44.673 19:26:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*)
00:25:44.673 19:26:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet
00:25:44.673 19:26:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:25:47.209 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:25:47.209 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:25:47.209 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:25:47.209 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:25:47.209 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:25:47.209 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:25:47.209 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:25:47.209 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:25:47.209 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:25:47.209 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:25:47.209 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:25:47.467 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:25:47.467 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:25:47.467 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:25:47.467 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:25:47.467 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:25:48.844 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
00:25:49.103
00:25:49.103 real 0m18.549s
00:25:49.103 user 0m4.238s
00:25:49.103 sys 0m9.850s
00:25:49.103 19:26:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable
00:25:49.103 19:26:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:25:49.103 ************************************
00:25:49.103 END TEST nvmf_identify_kernel_target
00:25:49.103 ************************************
00:25:49.103 19:26:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:25:49.103 19:26:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:25:49.103 19:26:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:25:49.103 19:26:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:49.103 ************************************
00:25:49.103 START TEST nvmf_auth_host
00:25:49.103 ************************************
00:25:49.103 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:25:49.103 * Looking for test storage...
00:25:49.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512")
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=()
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=()
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns
00:25:49.362 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:49.363 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:49.363 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:49.363 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:25:49.363 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:25:49.363 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable
00:25:49.363 19:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.935 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:25:55.935 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=()
00:25:55.935 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=()
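The digests and dhgroups arrays captured in the trace above drive the rest of this test: host/auth.sh exercises every hash/FFDHE-group combination. A minimal sketch of that sweep, with the loop body as a hypothetical placeholder for the per-combination connect/authenticate cycle (the arrays themselves are verbatim from the trace):

    digests=("sha256" "sha384" "sha512")
    dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            # placeholder: one DH-HMAC-CHAP connect/disconnect round per pair
            echo "auth round: hmac(${digest}) x ${dhgroup}"
        done
    done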
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=()
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=()
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=()
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=()
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=()
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:25:55.936 Found 0000:af:00.0 (0x8086 - 0x159b)
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:25:55.936 Found 0000:af:00.1 (0x8086 - 0x159b)
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]]
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:25:55.936 Found net devices under 0000:af:00.0: cvl_0_0
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]]
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:25:55.936 Found net devices under 0000:af:00.1: cvl_0_1
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:25:55.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:55.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms
00:25:55.936
00:25:55.936 --- 10.0.0.2 ping statistics ---
00:25:55.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:55.936 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:55.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:55.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms
00:25:55.936
00:25:55.936 --- 10.0.0.1 ping statistics ---
00:25:55.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:55.936 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:25:55.936 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:25:55.937 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth
00:25:55.937 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:25:55.937 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable
00:25:55.937 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.937 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1657942
00:25:55.937 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1657942
00:25:55.937 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1657942 ']'
00:25:55.937 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:55.937 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100
00:25:55.937 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
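Condensed from the nvmf_tcp_init sequence traced above: the target-side E810 port is moved into its own network namespace so host and target can talk over real NICs on one machine. All commands are verbatim from the trace; only the grouping comments are added:

    ip -4 addr flush cvl_0_0                       # start both ports with clean addresses
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                   # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # target port moves in
    ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator IP stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                             # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # and the reverse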
00:25:55.937 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable
00:25:55.937 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.937 19:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth
00:25:56.505 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:25:56.505 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0
00:25:56.505 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:25:56.505 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable
00:25:56.505 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.505 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:56.505 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT
00:25:56.506 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32
00:25:56.506 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key
00:25:56.506 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:25:56.506 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests
00:25:56.506 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null
00:25:56.506 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32
00:25:56.506 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom
00:25:56.506 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4276d8dacbb53abe72205ca7a18d5f08
00:25:56.506 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX
00:25:56.506 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.5ju
00:25:56.506 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4276d8dacbb53abe72205ca7a18d5f08 0
00:25:56.506 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4276d8dacbb53abe72205ca7a18d5f08 0
00:25:56.506 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest
00:25:56.506 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:25:56.506 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4276d8dacbb53abe72205ca7a18d5f08
00:25:56.506 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0
00:25:56.506 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python -
00:25:56.765 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.5ju
00:25:56.765 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.5ju
00:25:56.765 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.5ju
00:25:56.765 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64
00:25:56.765 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key
00:25:56.765 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:25:56.765 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests
00:25:56.765 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512
00:25:56.765 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64
00:25:56.765 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom
00:25:56.765 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e972ec8f1c56d25e5da01459f52b3e6eefc63188d1661869db7c12dd53a2d82d
00:25:56.765 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.pt4
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e972ec8f1c56d25e5da01459f52b3e6eefc63188d1661869db7c12dd53a2d82d 3
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e972ec8f1c56d25e5da01459f52b3e6eefc63188d1661869db7c12dd53a2d82d 3
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e972ec8f1c56d25e5da01459f52b3e6eefc63188d1661869db7c12dd53a2d82d
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python -
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.pt4
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.pt4
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.pt4
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fb5554d238e834fa28c959394a7c1680db8fe96487b34e99
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.klA
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fb5554d238e834fa28c959394a7c1680db8fe96487b34e99 0
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fb5554d238e834fa28c959394a7c1680db8fe96487b34e99 0
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fb5554d238e834fa28c959394a7c1680db8fe96487b34e99
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python -
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.klA
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.klA
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.klA
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e22feb34d5b1f6277fb7ccf423942b5fb312d01fbfa7878f
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.6gm
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e22feb34d5b1f6277fb7ccf423942b5fb312d01fbfa7878f 2
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e22feb34d5b1f6277fb7ccf423942b5fb312d01fbfa7878f 2
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e22feb34d5b1f6277fb7ccf423942b5fb312d01fbfa7878f
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python -
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.6gm
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.6gm
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.6gm
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9d5ac64532d4b427b29ec04d1c0e298d
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.QzO
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9d5ac64532d4b427b29ec04d1c0e298d 1
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9d5ac64532d4b427b29ec04d1c0e298d 1
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9d5ac64532d4b427b29ec04d1c0e298d
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python -
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.QzO
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.QzO
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.QzO
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests
00:25:56.766 19:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256
00:25:56.766 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=33a82589f81ee032a6c41b5e0ad02f4c
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.GvC
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 33a82589f81ee032a6c41b5e0ad02f4c 1
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 33a82589f81ee032a6c41b5e0ad02f4c 1
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=33a82589f81ee032a6c41b5e0ad02f4c
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python -
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.GvC
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.GvC
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.GvC
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=951dccd9e95f78a44ec7e466156b2274f244de1f6d6bd241
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.joC
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 951dccd9e95f78a44ec7e466156b2274f244de1f6d6bd241 2
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 951dccd9e95f78a44ec7e466156b2274f244de1f6d6bd241 2
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=951dccd9e95f78a44ec7e466156b2274f244de1f6d6bd241
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python -
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.joC
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.joC
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.joC
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f084f2af95589ed45a069ab91745420c
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.dgK
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f084f2af95589ed45a069ab91745420c 0
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f084f2af95589ed45a069ab91745420c 0
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f084f2af95589ed45a069ab91745420c
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python -
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.dgK
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.dgK
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.dgK
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2b8d44d177e65e942c722f22bacacf7ea75cf5373c4eb6226b432c7086876f71
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.bcA
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2b8d44d177e65e942c722f22bacacf7ea75cf5373c4eb6226b432c7086876f71 3
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2b8d44d177e65e942c722f22bacacf7ea75cf5373c4eb6226b432c7086876f71 3
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:25:57.026 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2b8d44d177e65e942c722f22bacacf7ea75cf5373c4eb6226b432c7086876f71
00:25:57.027 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3
00:25:57.027 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python -
00:25:57.027 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.bcA
00:25:57.027 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.bcA
00:25:57.027 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.bcA
00:25:57.027 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]=
00:25:57.027 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1657942
00:25:57.027 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1657942 ']'
00:25:57.027 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:57.027 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100
00:25:57.027 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:57.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.5ju
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.pt4 ]]
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pt4
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.klA
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.6gm ]]
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.6gm
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.QzO
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.GvC ]]
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.GvC
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.joC
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.dgK ]]
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.dgK
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.bcA
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]]
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init
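Each gen_dhchap_key call traced above draws len/2 random bytes with xxd and wraps them into a DHHC-1 secret through an inline python step before registering the file with rpc_cmd keyring_file_add_key. A minimal re-creation of the "null 32" case; the xxd/mktemp/chmod lines mirror the trace, while the DHHC-1 layout (key bytes plus little-endian CRC-32, base64-encoded) is an assumption based on the NVMe DH-HMAC-CHAP secret representation, not something visible in this log:

    key=$(xxd -p -c0 -l 16 /dev/urandom)    # 16 random bytes -> 32 hex characters
    file=$(mktemp -t spdk.key-null.XXX)
    # assumed DHHC-1 encoding: "DHHC-1:<digest id>:base64(key || crc32 le):", 00 = null digest
    python3 -c 'import base64, binascii, sys; raw = binascii.unhexlify(sys.argv[1]); print("DHHC-1:00:%s:" % base64.b64encode(raw + binascii.crc32(raw).to_bytes(4, "little")).decode())' "$key" > "$file"
    chmod 0600 "$file"                      # key files must not be world-readable
    # "$file" is what keyring_file_add_key key0 ... registered in the trace above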
00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:57.287 19:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:00.644 Waiting for block devices as requested 00:26:00.644 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:00.644 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:00.903 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:00.903 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:00.903 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:01.163 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:01.163 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:01.163 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:01.163 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:01.422 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:01.423 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:01.423 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:01.682 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:01.682 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:01.682 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:01.941 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:01.941 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:26:02.880 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:02.880 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:02.880 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:02.880 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:26:02.880 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:02.880 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:02.880 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:02.880 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:02.880 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:02.880 No valid GPT data, bailing 00:26:02.880 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:02.880 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:26:02.880 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:26:02.880 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:02.880 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:02.880 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:02.880 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:02.880 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:02.880 19:26:48 
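nvmet_auth_init has now built the kernel-target skeleton under configfs: one subsystem, one namespace, and one TCP port. Bash xtrace does not print '>' redirections, which is why the next stretch of the trace shows bare echo commands; their most plausible targets, going by the standard nvmet configfs attributes, are sketched below (a hedged reconstruction, not a quote from the scripts):

    sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=/sys/kernel/config/nvmet/ports/1
    echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$sub/attr_model"
    echo 1            > "$sub/attr_allow_any_host"       # opened here, locked down later
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"  # the free NVMe disk found above
    echo 1            > "$sub/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"                     # publish the subsystem on the port
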
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:02.880 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:26:02.880 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:02.880 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:26:02.880 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:02.880 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:26:02.880 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:26:02.880 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:26:02.880 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:02.880 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:26:02.880 00:26:02.880 Discovery Log Number of Records 2, Generation counter 2 00:26:02.880 =====Discovery Log Entry 0====== 00:26:02.880 trtype: tcp 00:26:02.880 adrfam: ipv4 00:26:02.880 subtype: current discovery subsystem 00:26:02.880 treq: not specified, sq flow control disable supported 00:26:02.880 portid: 1 00:26:02.880 trsvcid: 4420 00:26:02.880 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:02.880 traddr: 10.0.0.1 00:26:02.880 eflags: none 00:26:02.880 sectype: none 00:26:02.880 =====Discovery Log Entry 1====== 00:26:02.880 trtype: tcp 00:26:02.880 adrfam: ipv4 00:26:02.880 subtype: nvme subsystem 00:26:02.880 treq: not specified, sq flow control disable supported 00:26:02.880 portid: 1 00:26:02.880 trsvcid: 4420 00:26:02.880 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:02.880 traddr: 10.0.0.1 00:26:02.880 eflags: none 00:26:02.880 sectype: none 00:26:02.880 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:02.880 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmI1NTU0ZDIzOGU4MzRmYTI4Yzk1OTM5NGE3YzE2ODBkYjhmZTk2NDg3YjM0ZTk5vpIMLg==: 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host 
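After the discovery check succeeds, the target is locked down to a single host and given its DH-HMAC-CHAP material: attr_allow_any_host is cleared, the host NQN is linked into allowed_hosts, and nvmet_auth_set_key (whose echo ffdhe2048 / echo DHHC-1:... continue on the next stretch of the trace) writes the digest, DH group, and secrets into per-host attributes. The attribute names below are the kernel's nvmet auth configfs ABI as I understand it, not something visible in the xtrace, so treat this as an assumption:

    sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    mkdir "$host"
    echo 0 > "$sub/attr_allow_any_host"        # from now on only allowed_hosts may connect
    ln -s "$host" "$sub/allowed_hosts/"
    echo 'hmac(sha256)'       > "$host/dhchap_hash"       # digest under test
    echo ffdhe2048            > "$host/dhchap_dhgroup"    # DH group under test
    echo 'DHHC-1:00:<host secret>:'  > "$host/dhchap_key"       # placeholders for the
    echo 'DHHC-1:02:<ctrlr secret>:' > "$host/dhchap_ctrl_key"  # trace's real secrets
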
-- host/auth.sh@49 -- # echo ffdhe2048 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmI1NTU0ZDIzOGU4MzRmYTI4Yzk1OTM5NGE3YzE2ODBkYjhmZTk2NDg3YjM0ZTk5vpIMLg==: 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: ]] 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.881 19:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.142 nvme0n1 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI3NmQ4ZGFjYmI1M2FiZTcyMjA1Y2E3YTE4ZDVmMDiY4HXJ: 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI3NmQ4ZGFjYmI1M2FiZTcyMjA1Y2E3YTE4ZDVmMDiY4HXJ: 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: ]] 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
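connect_authenticate is the initiator half of each probe: bdev_nvme_set_options first narrows the host to one digest and one DH group (or, as in the first pass above, to the full comma-separated lists), then bdev_nvme_attach_controller connects using the keyring entries for the keyid under test. The enclosing loops sweep every digest x dhgroup x keyid combination. A condensed sketch of that sweep, with the RPC flags exactly as they appear in the trace and the target-side re-keying elided:

    rpc=/path/to/spdk/scripts/rpc.py     # hypothetical rpc.py location, as before
    for digest in sha256 sha384 sha512; do
      for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
        for keyid in 0 1 2 3 4; do
          # (re-program the kernel target's dhchap attributes here, as sketched above)
          "$rpc" bdev_nvme_set_options --dhchap-digests "$digest" \
                                       --dhchap-dhgroups "$dhgroup"
          "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
              -a 10.0.0.1 -s 4420 \
              -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
              --dhchap-key "key$keyid"   # plus --dhchap-ctrlr-key "ckey$keyid" where one exists
        done
      done
    done
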
00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.142 nvme0n1 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.142 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.402 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.402 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.402 19:26:49 
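Every attach in this run is followed by the same three-step verification before tearing down for the next combination: list the controllers, check that exactly nvme0 came back (a failed DH-HMAC-CHAP handshake would leave the list empty), then detach. Wrapped as a helper for clarity; the function wrapper is illustrative, the three commands are the trace's:

    rpc=/path/to/spdk/scripts/rpc.py     # hypothetical rpc.py location, as before
    verify_and_teardown() {
        local name
        name=$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')
        [[ $name == nvme0 ]] || return 1          # attach/authentication failed
        "$rpc" bdev_nvme_detach_controller nvme0  # clean slate for the next keyid
    }
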
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.402 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.402 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.402 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.402 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:03.402 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.402 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.402 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:03.402 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:03.402 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmI1NTU0ZDIzOGU4MzRmYTI4Yzk1OTM5NGE3YzE2ODBkYjhmZTk2NDg3YjM0ZTk5vpIMLg==: 00:26:03.402 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: 00:26:03.402 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.402 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:03.402 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmI1NTU0ZDIzOGU4MzRmYTI4Yzk1OTM5NGE3YzE2ODBkYjhmZTk2NDg3YjM0ZTk5vpIMLg==: 00:26:03.402 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: ]] 00:26:03.402 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.403 nvme0n1 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ1YWM2NDUzMmQ0YjQyN2IyOWVjMDRkMWMwZTI5OGTyk7Xe: 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:OWQ1YWM2NDUzMmQ0YjQyN2IyOWVjMDRkMWMwZTI5OGTyk7Xe: 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: ]] 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.403 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.663 nvme0n1 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTUxZGNjZDllOTVmNzhhNDRlYzdlNDY2MTU2YjIyNzRmMjQ0ZGUxZjZkNmJkMjQxhsUnTA==: 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTUxZGNjZDllOTVmNzhhNDRlYzdlNDY2MTU2YjIyNzRmMjQ0ZGUxZjZkNmJkMjQxhsUnTA==: 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: ]] 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.663 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.923 nvme0n1 00:26:03.923 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.923 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.923 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.923 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.923 19:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MmI4ZDQ0ZDE3N2U2NWU5NDJjNzIyZjIyYmFjYWNmN2VhNzVjZjUzNzNjNGViNjIyNmI0MzJjNzA4Njg3NmY3Mfm/Nko=: 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmI4ZDQ0ZDE3N2U2NWU5NDJjNzIyZjIyYmFjYWNmN2VhNzVjZjUzNzNjNGViNjIyNmI0MzJjNzA4Njg3NmY3Mfm/Nko=: 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.923 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.183 nvme0n1 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.183 19:26:50 
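keyid 4 exercises one-way authentication: no controller secret was generated for it, so the [[ -z '' ]] branch above is taken and the attach runs with --dhchap-key key4 only; the host proves its identity but does not challenge the controller. The plumbing behind that is the ${var:+word} expansion in auth.sh's ckey=(...) line, which yields the two extra argv words only when a controller key exists. A tiny runnable demo of the idiom (file name hypothetical):

    ckeys=([1]=ckey1.txt [4]="")          # keyid 1 has a ctrlr key, keyid 4 does not
    for keyid in 1 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid extra args: ${ckey[*]:-<none>}"
    done
    # prints: keyid=1 extra args: --dhchap-ctrlr-key ckey1
    #         keyid=4 extra args: <none>
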
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI3NmQ4ZGFjYmI1M2FiZTcyMjA1Y2E3YTE4ZDVmMDiY4HXJ: 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI3NmQ4ZGFjYmI1M2FiZTcyMjA1Y2E3YTE4ZDVmMDiY4HXJ: 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: ]] 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.183 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.443 nvme0n1 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmI1NTU0ZDIzOGU4MzRmYTI4Yzk1OTM5NGE3YzE2ODBkYjhmZTk2NDg3YjM0ZTk5vpIMLg==: 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmI1NTU0ZDIzOGU4MzRmYTI4Yzk1OTM5NGE3YzE2ODBkYjhmZTk2NDg3YjM0ZTk5vpIMLg==: 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: ]] 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:04.443 
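A note on the secrets themselves: every key in this trace uses the NVMe in-band-authentication interchange format DHHC-1:<t>:<base64>:, where, as I read TP 8006 and nvme-cli's gen-dhchap-key, <t> names the transformation hash applied for key reuse (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload is the raw secret followed by a 4-byte CRC32. That reading is spec knowledge, not something the log states, so the decoder below is a hedged sketch (the key is the real key1 from this run):

    k='DHHC-1:00:ZmI1NTU0ZDIzOGU4MzRmYTI4Yzk1OTM5NGE3YzE2ODBkYjhmZTk2NDg3YjM0ZTk5vpIMLg==:'
    IFS=: read -r magic transform b64 _ <<< "$k"
    echo "format=$magic transform=$transform"
    # payload length minus the 4-byte CRC32 trailer = raw secret length (48 here)
    echo "secret bytes: $(( $(printf %s "$b64" | base64 -d | wc -c) - 4 ))"
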
19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.443 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.703 nvme0n1 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ1YWM2NDUzMmQ0YjQyN2IyOWVjMDRkMWMwZTI5OGTyk7Xe: 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ1YWM2NDUzMmQ0YjQyN2IyOWVjMDRkMWMwZTI5OGTyk7Xe: 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: ]] 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.703 19:26:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.703 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.963 nvme0n1 00:26:04.963 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.963 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.963 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.963 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.963 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.963 19:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTUxZGNjZDllOTVmNzhhNDRlYzdlNDY2MTU2YjIyNzRmMjQ0ZGUxZjZkNmJkMjQxhsUnTA==: 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTUxZGNjZDllOTVmNzhhNDRlYzdlNDY2MTU2YjIyNzRmMjQ0ZGUxZjZkNmJkMjQxhsUnTA==: 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: ]] 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.963 19:26:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.963 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.223 nvme0n1 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmI4ZDQ0ZDE3N2U2NWU5NDJjNzIyZjIyYmFjYWNmN2VhNzVjZjUzNzNjNGViNjIyNmI0MzJjNzA4Njg3NmY3Mfm/Nko=: 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmI4ZDQ0ZDE3N2U2NWU5NDJjNzIyZjIyYmFjYWNmN2VhNzVjZjUzNzNjNGViNjIyNmI0MzJjNzA4Njg3NmY3Mfm/Nko=: 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:05.223 19:26:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.223 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.483 nvme0n1 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI3NmQ4ZGFjYmI1M2FiZTcyMjA1Y2E3YTE4ZDVmMDiY4HXJ: 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI3NmQ4ZGFjYmI1M2FiZTcyMjA1Y2E3YTE4ZDVmMDiY4HXJ: 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: ]] 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.483 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.742 nvme0n1 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmI1NTU0ZDIzOGU4MzRmYTI4Yzk1OTM5NGE3YzE2ODBkYjhmZTk2NDg3YjM0ZTk5vpIMLg==: 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: 00:26:05.742 19:26:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmI1NTU0ZDIzOGU4MzRmYTI4Yzk1OTM5NGE3YzE2ODBkYjhmZTk2NDg3YjM0ZTk5vpIMLg==: 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: ]] 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:05.742 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.743 19:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.002 nvme0n1 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ1YWM2NDUzMmQ0YjQyN2IyOWVjMDRkMWMwZTI5OGTyk7Xe: 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ1YWM2NDUzMmQ0YjQyN2IyOWVjMDRkMWMwZTI5OGTyk7Xe: 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: ]] 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.002 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.261 nvme0n1 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTUxZGNjZDllOTVmNzhhNDRlYzdlNDY2MTU2YjIyNzRmMjQ0ZGUxZjZkNmJkMjQxhsUnTA==: 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTUxZGNjZDllOTVmNzhhNDRlYzdlNDY2MTU2YjIyNzRmMjQ0ZGUxZjZkNmJkMjQxhsUnTA==: 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: ]] 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.261 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.521 nvme0n1 00:26:06.521 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.521 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.521 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.521 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.521 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.521 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.521 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.521 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.521 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.521 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.780 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.780 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.780 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:06.781 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.781 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.781 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:06.781 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:06.781 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmI4ZDQ0ZDE3N2U2NWU5NDJjNzIyZjIyYmFjYWNmN2VhNzVjZjUzNzNjNGViNjIyNmI0MzJjNzA4Njg3NmY3Mfm/Nko=: 00:26:06.781 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:06.781 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.781 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:06.781 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmI4ZDQ0ZDE3N2U2NWU5NDJjNzIyZjIyYmFjYWNmN2VhNzVjZjUzNzNjNGViNjIyNmI0MzJjNzA4Njg3NmY3Mfm/Nko=: 00:26:06.781 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:06.781 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:06.781 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.781 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:06.781 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:06.781 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:06.781 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.781 19:26:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:06.781 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.781 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.781 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.781 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.781 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.781 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.781 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.781 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.781 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.781 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:06.781 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.781 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.781 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.781 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.781 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:06.781 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.781 19:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.781 nvme0n1 00:26:06.781 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.781 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.781 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.781 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.781 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI3NmQ4ZGFjYmI1M2FiZTcyMjA1Y2E3YTE4ZDVmMDiY4HXJ: 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI3NmQ4ZGFjYmI1M2FiZTcyMjA1Y2E3YTE4ZDVmMDiY4HXJ: 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: ]] 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_INITIATOR_IP 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.041 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.301 nvme0n1 00:26:07.301 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.301 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.301 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.301 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.301 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmI1NTU0ZDIzOGU4MzRmYTI4Yzk1OTM5NGE3YzE2ODBkYjhmZTk2NDg3YjM0ZTk5vpIMLg==: 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmI1NTU0ZDIzOGU4MzRmYTI4Yzk1OTM5NGE3YzE2ODBkYjhmZTk2NDg3YjM0ZTk5vpIMLg==: 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: ]] 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: 
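Note: every iteration traced above repeats one fixed sequence per (digest, dhgroup, keyid) combination. The following is a minimal bash sketch of that sequence, reconstructed only from the rpc_cmd calls visible in this trace; rpc_cmd is the test suite's rpc.py wrapper, nvmet_auth_set_key is the suite's helper that installs the key on the kernel nvmet side (its configfs writes are not shown in this excerpt), and the keyring entries keyN/ckeyN are assumed to have been registered earlier in the run.

# Sketch of one connect_authenticate iteration, assumptions noted inline.
digest=sha256          # sha256 throughout this excerpt of the run
dhgroup=ffdhe3072      # the trace then advances to ffdhe4096, ffdhe6144, ffdhe8192
keyid=2                # keyids 0..4 are cycled; keyid 4 carries no controller key
ckey="ckey$keyid"      # leave empty when no bidirectional key exists for this keyid

# 1) Install the key for this digest/dhgroup/keyid on the kernel nvmet target
#    (test-suite helper; assumed behavior, not shown in this excerpt).
nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

# 2) Restrict the SPDK host side to the digest and DH group under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 3) Connect with DH-CHAP; add the controller key only when one exists,
#    mirroring the ${ckeys[keyid]:+...} expansion in host/auth.sh.
ctrlr_key_arg=(${ckey:+--dhchap-ctrlr-key "$ckey"})
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" "${ctrlr_key_arg[@]}"

# 4) Verify the controller authenticated and came up, then detach it
#    so the next digest/dhgroup/keyid combination starts clean.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0

The "nvme0n1" lines interleaved in the trace are the namespace appearing once each attach succeeds, which is why each iteration ends with the get_controllers check followed by a detach.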
00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.302 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.871 nvme0n1 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.871 19:26:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ1YWM2NDUzMmQ0YjQyN2IyOWVjMDRkMWMwZTI5OGTyk7Xe: 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ1YWM2NDUzMmQ0YjQyN2IyOWVjMDRkMWMwZTI5OGTyk7Xe: 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: ]] 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.871 19:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.131 nvme0n1 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTUxZGNjZDllOTVmNzhhNDRlYzdlNDY2MTU2YjIyNzRmMjQ0ZGUxZjZkNmJkMjQxhsUnTA==: 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:OTUxZGNjZDllOTVmNzhhNDRlYzdlNDY2MTU2YjIyNzRmMjQ0ZGUxZjZkNmJkMjQxhsUnTA==: 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: ]] 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:08.131 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:08.132 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:08.132 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.132 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.700 nvme0n1 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmI4ZDQ0ZDE3N2U2NWU5NDJjNzIyZjIyYmFjYWNmN2VhNzVjZjUzNzNjNGViNjIyNmI0MzJjNzA4Njg3NmY3Mfm/Nko=: 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmI4ZDQ0ZDE3N2U2NWU5NDJjNzIyZjIyYmFjYWNmN2VhNzVjZjUzNzNjNGViNjIyNmI0MzJjNzA4Njg3NmY3Mfm/Nko=: 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.700 19:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.959 nvme0n1 00:26:08.959 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.959 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.959 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.960 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.960 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.960 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.960 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.960 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.960 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.960 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.960 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.960 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:08.960 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.960 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:08.960 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.960 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:08.960 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:08.960 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:08.960 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI3NmQ4ZGFjYmI1M2FiZTcyMjA1Y2E3YTE4ZDVmMDiY4HXJ: 00:26:08.960 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: 00:26:08.960 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:08.960 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:08.960 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI3NmQ4ZGFjYmI1M2FiZTcyMjA1Y2E3YTE4ZDVmMDiY4HXJ: 00:26:08.960 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: ]] 00:26:08.960 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: 00:26:08.960 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:08.960 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.960 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:08.960 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:08.960 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:08.960 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.960 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:08.960 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.960 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.219 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.219 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.219 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.219 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.219 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.219 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.219 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.219 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:09.219 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.219 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:09.219 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:09.219 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:09.219 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:09.219 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.219 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:09.787 nvme0n1 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmI1NTU0ZDIzOGU4MzRmYTI4Yzk1OTM5NGE3YzE2ODBkYjhmZTk2NDg3YjM0ZTk5vpIMLg==: 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmI1NTU0ZDIzOGU4MzRmYTI4Yzk1OTM5NGE3YzE2ODBkYjhmZTk2NDg3YjM0ZTk5vpIMLg==: 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: ]] 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.787 19:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.354 nvme0n1 00:26:10.354 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.354 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.354 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.354 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.354 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.354 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.354 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.354 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.354 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.354 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.354 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.354 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.354 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:10.354 
19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.354 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:10.354 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:10.354 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:10.354 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ1YWM2NDUzMmQ0YjQyN2IyOWVjMDRkMWMwZTI5OGTyk7Xe: 00:26:10.354 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: 00:26:10.354 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:10.354 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:10.354 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ1YWM2NDUzMmQ0YjQyN2IyOWVjMDRkMWMwZTI5OGTyk7Xe: 00:26:10.355 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: ]] 00:26:10.355 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: 00:26:10.355 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:10.355 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.355 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:10.355 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:10.355 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:10.355 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.355 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:10.355 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.355 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.355 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.355 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.355 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.355 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.355 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.355 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.355 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.355 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:10.355 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.355 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:10.355 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:10.355 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:10.355 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:10.355 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.355 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.923 nvme0n1 00:26:10.923 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.923 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.923 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.923 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.923 19:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTUxZGNjZDllOTVmNzhhNDRlYzdlNDY2MTU2YjIyNzRmMjQ0ZGUxZjZkNmJkMjQxhsUnTA==: 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTUxZGNjZDllOTVmNzhhNDRlYzdlNDY2MTU2YjIyNzRmMjQ0ZGUxZjZkNmJkMjQxhsUnTA==: 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: ]] 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.923 
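
[editor's note] On the initiator side, each digest/dhgroup/keyid combination reduces to the same three RPCs, all visible verbatim in the trace above. A sketch using scripts/rpc.py directly (rpc_cmd in the test is a thin wrapper around it; the DHHC-1 secrets are assumed to be registered in SPDK's keyring under the names key2/ckey2 earlier in the script):

# Initiator side: pin the allowed digest/dhgroup, authenticate, verify, detach.
rpc=scripts/rpc.py

$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
  -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
  --dhchap-key key2 --dhchap-ctrlr-key ckey2

$rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
$rpc bdev_nvme_detach_controller nvme0              # tear down for the next combination
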
19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.923 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.490 nvme0n1 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmI4ZDQ0ZDE3N2U2NWU5NDJjNzIyZjIyYmFjYWNmN2VhNzVjZjUzNzNjNGViNjIyNmI0MzJjNzA4Njg3NmY3Mfm/Nko=: 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmI4ZDQ0ZDE3N2U2NWU5NDJjNzIyZjIyYmFjYWNmN2VhNzVjZjUzNzNjNGViNjIyNmI0MzJjNzA4Njg3NmY3Mfm/Nko=: 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.490 19:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.058 nvme0n1 00:26:12.058 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.058 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.058 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.058 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.058 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.058 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.058 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.058 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.058 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.058 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI3NmQ4ZGFjYmI1M2FiZTcyMjA1Y2E3YTE4ZDVmMDiY4HXJ: 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI3NmQ4ZGFjYmI1M2FiZTcyMjA1Y2E3YTE4ZDVmMDiY4HXJ: 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: ]] 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.318 nvme0n1 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmI1NTU0ZDIzOGU4MzRmYTI4Yzk1OTM5NGE3YzE2ODBkYjhmZTk2NDg3YjM0ZTk5vpIMLg==: 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmI1NTU0ZDIzOGU4MzRmYTI4Yzk1OTM5NGE3YzE2ODBkYjhmZTk2NDg3YjM0ZTk5vpIMLg==: 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: ]] 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.318 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.319 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.319 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.319 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:12.319 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.319 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:12.319 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:12.319 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:12.319 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:12.319 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.319 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.578 nvme0n1 00:26:12.578 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.578 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.578 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.578 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.578 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.578 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.578 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.578 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.578 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.578 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.578 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.578 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.578 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:12.578 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.578 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:12.578 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:12.578 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:12.578 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ1YWM2NDUzMmQ0YjQyN2IyOWVjMDRkMWMwZTI5OGTyk7Xe: 00:26:12.578 19:26:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: 00:26:12.578 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:12.578 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:12.578 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ1YWM2NDUzMmQ0YjQyN2IyOWVjMDRkMWMwZTI5OGTyk7Xe: 00:26:12.578 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: ]] 00:26:12.579 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: 00:26:12.579 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:12.579 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.579 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:12.579 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:12.579 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:12.579 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.579 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:12.579 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.579 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.579 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.579 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.579 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.579 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.579 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.579 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.579 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.579 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:12.579 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.579 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:12.579 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:12.579 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:12.579 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:12.579 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.579 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.838 nvme0n1 00:26:12.838 19:26:58 
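
[editor's note] The markers host/auth.sh@100-104 in the trace show the three nested loops driving this whole section: every digest is crossed with every DH group and every key index, which is why the same attach/verify/detach pattern repeats. Reconstructed in outline; sha256/sha384 and ffdhe2048/ffdhe3072/ffdhe6144/ffdhe8192 appear in this excerpt, the remaining array entries are assumptions, and the keys array plus both helper functions are defined earlier in host/auth.sh:

# Outer test loops (host/auth.sh@100-102).
digests=(sha256 sha384 sha512)                       # sha512 assumed
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)  # ffdhe4096 assumed

for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do                       # key indices 0..4
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # target side   (@103)
      connect_authenticate "$digest" "$dhgroup" "$keyid" # initiator side (@104)
    done
  done
done
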
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.838 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.838 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.838 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.838 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.838 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.838 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.838 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.838 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.838 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.838 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.838 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.838 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:12.838 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.838 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:12.838 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:12.838 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:12.838 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTUxZGNjZDllOTVmNzhhNDRlYzdlNDY2MTU2YjIyNzRmMjQ0ZGUxZjZkNmJkMjQxhsUnTA==: 00:26:12.838 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: 00:26:12.838 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:12.838 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:12.838 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTUxZGNjZDllOTVmNzhhNDRlYzdlNDY2MTU2YjIyNzRmMjQ0ZGUxZjZkNmJkMjQxhsUnTA==: 00:26:12.839 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: ]] 00:26:12.839 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: 00:26:12.839 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:12.839 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.839 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:12.839 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:12.839 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:12.839 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.839 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:26:12.839 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.839 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.839 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.839 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.839 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.839 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.839 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.839 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.839 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.839 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:12.839 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.839 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:12.839 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:12.839 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:12.839 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:12.839 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.839 19:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.120 nvme0n1 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmI4ZDQ0ZDE3N2U2NWU5NDJjNzIyZjIyYmFjYWNmN2VhNzVjZjUzNzNjNGViNjIyNmI0MzJjNzA4Njg3NmY3Mfm/Nko=: 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmI4ZDQ0ZDE3N2U2NWU5NDJjNzIyZjIyYmFjYWNmN2VhNzVjZjUzNzNjNGViNjIyNmI0MzJjNzA4Njg3NmY3Mfm/Nko=: 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.121 nvme0n1 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.121 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI3NmQ4ZGFjYmI1M2FiZTcyMjA1Y2E3YTE4ZDVmMDiY4HXJ: 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI3NmQ4ZGFjYmI1M2FiZTcyMjA1Y2E3YTE4ZDVmMDiY4HXJ: 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: ]] 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.383 nvme0n1 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.383 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.384 
19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmI1NTU0ZDIzOGU4MzRmYTI4Yzk1OTM5NGE3YzE2ODBkYjhmZTk2NDg3YjM0ZTk5vpIMLg==: 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmI1NTU0ZDIzOGU4MzRmYTI4Yzk1OTM5NGE3YzE2ODBkYjhmZTk2NDg3YjM0ZTk5vpIMLg==: 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: ]] 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:13.384 19:26:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.384 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.644 nvme0n1 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ1YWM2NDUzMmQ0YjQyN2IyOWVjMDRkMWMwZTI5OGTyk7Xe: 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ1YWM2NDUzMmQ0YjQyN2IyOWVjMDRkMWMwZTI5OGTyk7Xe: 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: ]] 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.644 19:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.904 nvme0n1 00:26:13.904 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.904 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.904 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.904 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.904 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.904 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.904 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:26:13.904 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.904 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.904 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.904 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.904 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.904 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:13.904 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.904 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:13.904 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:13.904 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:13.904 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTUxZGNjZDllOTVmNzhhNDRlYzdlNDY2MTU2YjIyNzRmMjQ0ZGUxZjZkNmJkMjQxhsUnTA==: 00:26:13.904 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: 00:26:13.904 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:13.904 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:13.904 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTUxZGNjZDllOTVmNzhhNDRlYzdlNDY2MTU2YjIyNzRmMjQ0ZGUxZjZkNmJkMjQxhsUnTA==: 00:26:13.904 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: ]] 00:26:13.904 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: 00:26:13.904 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:13.904 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.905 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:13.905 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:13.905 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:13.905 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.905 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:13.905 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.905 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.905 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.905 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.905 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:13.905 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.905 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:26:13.905 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.905 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.905 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:13.905 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.905 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:13.905 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:13.905 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:13.905 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:13.905 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.905 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.165 nvme0n1 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmI4ZDQ0ZDE3N2U2NWU5NDJjNzIyZjIyYmFjYWNmN2VhNzVjZjUzNzNjNGViNjIyNmI0MzJjNzA4Njg3NmY3Mfm/Nko=: 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:14.165 
19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmI4ZDQ0ZDE3N2U2NWU5NDJjNzIyZjIyYmFjYWNmN2VhNzVjZjUzNzNjNGViNjIyNmI0MzJjNzA4Njg3NmY3Mfm/Nko=: 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.165 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.425 nvme0n1 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.425 
19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI3NmQ4ZGFjYmI1M2FiZTcyMjA1Y2E3YTE4ZDVmMDiY4HXJ: 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI3NmQ4ZGFjYmI1M2FiZTcyMjA1Y2E3YTE4ZDVmMDiY4HXJ: 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: ]] 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.425 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.685 nvme0n1 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZmI1NTU0ZDIzOGU4MzRmYTI4Yzk1OTM5NGE3YzE2ODBkYjhmZTk2NDg3YjM0ZTk5vpIMLg==: 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmI1NTU0ZDIzOGU4MzRmYTI4Yzk1OTM5NGE3YzE2ODBkYjhmZTk2NDg3YjM0ZTk5vpIMLg==: 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: ]] 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:14.685 19:27:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.685 19:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.945 nvme0n1 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ1YWM2NDUzMmQ0YjQyN2IyOWVjMDRkMWMwZTI5OGTyk7Xe: 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ1YWM2NDUzMmQ0YjQyN2IyOWVjMDRkMWMwZTI5OGTyk7Xe: 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: ]] 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.945 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.205 nvme0n1 00:26:15.205 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.205 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.205 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.205 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.205 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.205 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.205 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.205 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.205 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.205 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.205 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.205 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.205 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:26:15.205 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.205 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:15.205 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:15.205 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:15.205 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTUxZGNjZDllOTVmNzhhNDRlYzdlNDY2MTU2YjIyNzRmMjQ0ZGUxZjZkNmJkMjQxhsUnTA==: 00:26:15.205 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: 00:26:15.205 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:15.205 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:15.205 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTUxZGNjZDllOTVmNzhhNDRlYzdlNDY2MTU2YjIyNzRmMjQ0ZGUxZjZkNmJkMjQxhsUnTA==: 00:26:15.205 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: ]] 00:26:15.205 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: 00:26:15.205 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:15.205 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.205 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:15.205 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:15.205 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:15.205 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.205 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:15.205 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.205 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.464 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.464 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.464 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.464 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.465 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.465 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.465 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.465 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:15.465 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.465 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:15.465 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:15.465 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:15.465 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:15.465 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.465 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.724 nvme0n1 00:26:15.724 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.724 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.724 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.724 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.724 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.724 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.724 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.724 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.724 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.724 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.724 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.724 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.724 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:15.724 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.724 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:15.724 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:15.724 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:15.724 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmI4ZDQ0ZDE3N2U2NWU5NDJjNzIyZjIyYmFjYWNmN2VhNzVjZjUzNzNjNGViNjIyNmI0MzJjNzA4Njg3NmY3Mfm/Nko=: 00:26:15.724 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:15.724 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:15.724 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:15.724 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmI4ZDQ0ZDE3N2U2NWU5NDJjNzIyZjIyYmFjYWNmN2VhNzVjZjUzNzNjNGViNjIyNmI0MzJjNzA4Njg3NmY3Mfm/Nko=: 00:26:15.724 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:15.724 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:15.724 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.724 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:15.724 19:27:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:15.724 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:15.724 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.724 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:15.725 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.725 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.725 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.725 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.725 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.725 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.725 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.725 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.725 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.725 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:15.725 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.725 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:15.725 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:15.725 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:15.725 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:15.725 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.725 19:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.984 nvme0n1 00:26:15.984 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.984 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.984 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.984 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.984 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.984 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.984 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.984 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.984 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.984 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.984 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.984 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:15.984 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.984 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:15.984 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.984 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:15.984 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:15.984 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:15.984 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI3NmQ4ZGFjYmI1M2FiZTcyMjA1Y2E3YTE4ZDVmMDiY4HXJ: 00:26:15.984 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: 00:26:15.985 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:15.985 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:15.985 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI3NmQ4ZGFjYmI1M2FiZTcyMjA1Y2E3YTE4ZDVmMDiY4HXJ: 00:26:15.985 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: ]] 00:26:15.985 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: 00:26:15.985 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:15.985 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.985 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:15.985 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:15.985 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:15.985 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.985 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:15.985 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.985 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.985 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.985 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.985 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.985 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.985 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.985 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.985 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.985 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:15.985 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.985 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:15.985 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:15.985 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:15.985 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:15.985 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.985 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.244 nvme0n1 00:26:16.244 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.244 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.244 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.244 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.244 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.244 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmI1NTU0ZDIzOGU4MzRmYTI4Yzk1OTM5NGE3YzE2ODBkYjhmZTk2NDg3YjM0ZTk5vpIMLg==: 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZmI1NTU0ZDIzOGU4MzRmYTI4Yzk1OTM5NGE3YzE2ODBkYjhmZTk2NDg3YjM0ZTk5vpIMLg==: 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: ]] 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.502 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.761 nvme0n1 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.761 19:27:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ1YWM2NDUzMmQ0YjQyN2IyOWVjMDRkMWMwZTI5OGTyk7Xe: 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ1YWM2NDUzMmQ0YjQyN2IyOWVjMDRkMWMwZTI5OGTyk7Xe: 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: ]] 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.761 19:27:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:16.761 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.762 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:16.762 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:16.762 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:16.762 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:16.762 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.762 19:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.328 nvme0n1 00:26:17.328 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.328 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.328 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.328 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.328 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.328 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.328 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.328 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.328 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.328 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.328 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.328 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.328 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:17.328 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.328 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.328 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:17.328 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:17.328 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OTUxZGNjZDllOTVmNzhhNDRlYzdlNDY2MTU2YjIyNzRmMjQ0ZGUxZjZkNmJkMjQxhsUnTA==: 00:26:17.328 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: 00:26:17.328 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.328 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:17.328 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTUxZGNjZDllOTVmNzhhNDRlYzdlNDY2MTU2YjIyNzRmMjQ0ZGUxZjZkNmJkMjQxhsUnTA==: 00:26:17.328 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: ]] 00:26:17.328 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: 00:26:17.328 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:17.328 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.328 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.328 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:17.328 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:17.328 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.328 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:17.328 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.328 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.328 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.328 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.329 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.329 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.329 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.329 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.329 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.329 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.329 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.329 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.329 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.329 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.329 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:17.329 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.329 
19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.587 nvme0n1 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmI4ZDQ0ZDE3N2U2NWU5NDJjNzIyZjIyYmFjYWNmN2VhNzVjZjUzNzNjNGViNjIyNmI0MzJjNzA4Njg3NmY3Mfm/Nko=: 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmI4ZDQ0ZDE3N2U2NWU5NDJjNzIyZjIyYmFjYWNmN2VhNzVjZjUzNzNjNGViNjIyNmI0MzJjNzA4Njg3NmY3Mfm/Nko=: 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.587 19:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.155 nvme0n1 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.155 19:27:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI3NmQ4ZGFjYmI1M2FiZTcyMjA1Y2E3YTE4ZDVmMDiY4HXJ: 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI3NmQ4ZGFjYmI1M2FiZTcyMjA1Y2E3YTE4ZDVmMDiY4HXJ: 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: ]] 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.155 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.156 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.156 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.156 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.156 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.156 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.156 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.156 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:18.156 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.156 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.724 nvme0n1 00:26:18.724 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.724 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.724 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.724 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.724 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.724 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.724 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.724 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.724 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.724 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.724 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.724 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.724 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:18.724 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.724 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:18.724 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:18.724 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:18.724 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmI1NTU0ZDIzOGU4MzRmYTI4Yzk1OTM5NGE3YzE2ODBkYjhmZTk2NDg3YjM0ZTk5vpIMLg==: 00:26:18.724 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: 00:26:18.724 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:18.724 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:18.724 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmI1NTU0ZDIzOGU4MzRmYTI4Yzk1OTM5NGE3YzE2ODBkYjhmZTk2NDg3YjM0ZTk5vpIMLg==: 00:26:18.724 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: ]] 00:26:18.724 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: 00:26:18.724 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:18.724 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.724 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:18.724 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:18.724 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:18.724 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.724 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:18.724 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.725 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.725 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.725 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.725 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.725 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.725 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.725 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.725 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.725 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.725 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.725 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.725 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.725 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.725 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:18.725 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.725 19:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.292 nvme0n1 00:26:19.292 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.292 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.292 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.292 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.292 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.292 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.292 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.292 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.292 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:19.292 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.292 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.292 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.292 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:19.292 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.292 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:19.292 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:19.292 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:19.292 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ1YWM2NDUzMmQ0YjQyN2IyOWVjMDRkMWMwZTI5OGTyk7Xe: 00:26:19.292 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: 00:26:19.292 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:19.292 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:19.292 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ1YWM2NDUzMmQ0YjQyN2IyOWVjMDRkMWMwZTI5OGTyk7Xe: 00:26:19.292 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: ]] 00:26:19.292 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: 00:26:19.292 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:19.292 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.292 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:19.292 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:19.292 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:19.292 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.292 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:19.292 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.292 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.292 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.551 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.551 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.551 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.551 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.551 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.551 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.551 
19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:19.551 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.551 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:19.552 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:19.552 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:19.552 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:19.552 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.552 19:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.120 nvme0n1 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTUxZGNjZDllOTVmNzhhNDRlYzdlNDY2MTU2YjIyNzRmMjQ0ZGUxZjZkNmJkMjQxhsUnTA==: 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTUxZGNjZDllOTVmNzhhNDRlYzdlNDY2MTU2YjIyNzRmMjQ0ZGUxZjZkNmJkMjQxhsUnTA==: 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: ]] 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.120 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.693 nvme0n1 00:26:20.693 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.693 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.693 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.693 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.693 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.693 19:27:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.693 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.693 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.693 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.693 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.693 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.693 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.693 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:20.694 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.694 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:20.694 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:20.694 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:20.694 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmI4ZDQ0ZDE3N2U2NWU5NDJjNzIyZjIyYmFjYWNmN2VhNzVjZjUzNzNjNGViNjIyNmI0MzJjNzA4Njg3NmY3Mfm/Nko=: 00:26:20.694 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:20.694 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:20.694 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:20.694 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmI4ZDQ0ZDE3N2U2NWU5NDJjNzIyZjIyYmFjYWNmN2VhNzVjZjUzNzNjNGViNjIyNmI0MzJjNzA4Njg3NmY3Mfm/Nko=: 00:26:20.694 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:20.694 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:20.694 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.694 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:20.694 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:20.694 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:20.694 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.694 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:20.694 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.694 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.694 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.694 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.694 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:20.694 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:20.694 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:20.694 19:27:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.694 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.694 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:20.694 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.694 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:20.694 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:20.694 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:20.694 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:20.694 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.694 19:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.262 nvme0n1 00:26:21.262 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.262 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.262 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.262 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.262 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.262 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.262 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.262 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.262 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.262 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.262 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.262 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:21.262 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:21.262 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.262 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:21.262 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.262 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:21.262 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:21.262 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:21.262 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI3NmQ4ZGFjYmI1M2FiZTcyMjA1Y2E3YTE4ZDVmMDiY4HXJ: 00:26:21.262 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: 00:26:21.262 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:21.262 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:21.262 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI3NmQ4ZGFjYmI1M2FiZTcyMjA1Y2E3YTE4ZDVmMDiY4HXJ: 00:26:21.262 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: ]] 00:26:21.262 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: 00:26:21.262 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:21.262 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.262 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:21.262 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:21.262 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:21.262 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.262 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:21.262 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.263 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.263 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.263 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.263 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:21.263 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:21.263 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:21.263 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.263 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.263 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:21.263 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.263 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:21.263 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:21.263 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:21.263 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:21.263 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.263 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:21.522 nvme0n1 00:26:21.522 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.522 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.522 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.522 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.522 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.522 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.522 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.522 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.522 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.522 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.522 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.522 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.522 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:21.522 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.522 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:21.522 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:21.522 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:21.522 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmI1NTU0ZDIzOGU4MzRmYTI4Yzk1OTM5NGE3YzE2ODBkYjhmZTk2NDg3YjM0ZTk5vpIMLg==: 00:26:21.522 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: 00:26:21.523 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:21.523 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:21.523 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmI1NTU0ZDIzOGU4MzRmYTI4Yzk1OTM5NGE3YzE2ODBkYjhmZTk2NDg3YjM0ZTk5vpIMLg==: 00:26:21.523 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: ]] 00:26:21.523 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: 00:26:21.523 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:21.523 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.523 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:21.523 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:21.523 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:21.523 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:21.523 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:21.523 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.523 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.523 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.523 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.523 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:21.523 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:21.523 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:21.523 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.523 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.523 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:21.523 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.523 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:21.523 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:21.523 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:21.523 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:21.523 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.523 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.782 nvme0n1 00:26:21.782 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.782 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.782 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.782 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.782 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.782 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.782 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.782 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.782 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.782 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.782 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.782 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.782 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:21.782 
19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ1YWM2NDUzMmQ0YjQyN2IyOWVjMDRkMWMwZTI5OGTyk7Xe: 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ1YWM2NDUzMmQ0YjQyN2IyOWVjMDRkMWMwZTI5OGTyk7Xe: 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: ]] 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.783 nvme0n1 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.783 19:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.783 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTUxZGNjZDllOTVmNzhhNDRlYzdlNDY2MTU2YjIyNzRmMjQ0ZGUxZjZkNmJkMjQxhsUnTA==: 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTUxZGNjZDllOTVmNzhhNDRlYzdlNDY2MTU2YjIyNzRmMjQ0ZGUxZjZkNmJkMjQxhsUnTA==: 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: ]] 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.043 
19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:22.043 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.044 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.044 nvme0n1 00:26:22.044 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.044 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.044 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.044 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.044 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.044 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.044 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.044 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.044 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.044 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:22.044 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.044 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.044 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:22.044 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.044 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:22.044 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:22.044 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:22.044 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmI4ZDQ0ZDE3N2U2NWU5NDJjNzIyZjIyYmFjYWNmN2VhNzVjZjUzNzNjNGViNjIyNmI0MzJjNzA4Njg3NmY3Mfm/Nko=: 00:26:22.044 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:22.044 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:22.044 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:22.044 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmI4ZDQ0ZDE3N2U2NWU5NDJjNzIyZjIyYmFjYWNmN2VhNzVjZjUzNzNjNGViNjIyNmI0MzJjNzA4Njg3NmY3Mfm/Nko=: 00:26:22.044 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:22.044 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:22.044 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.044 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:22.044 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:22.044 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:22.044 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.044 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:22.044 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.044 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.044 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.044 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.304 nvme0n1 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI3NmQ4ZGFjYmI1M2FiZTcyMjA1Y2E3YTE4ZDVmMDiY4HXJ: 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI3NmQ4ZGFjYmI1M2FiZTcyMjA1Y2E3YTE4ZDVmMDiY4HXJ: 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: ]] 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.304 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:22.305 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:22.305 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:22.305 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:22.305 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.305 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.563 nvme0n1 00:26:22.563 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.563 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.563 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.563 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.563 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.563 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.563 
19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.563 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.563 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.563 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.563 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.563 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.563 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:22.563 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.563 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:22.563 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:22.563 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:22.563 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmI1NTU0ZDIzOGU4MzRmYTI4Yzk1OTM5NGE3YzE2ODBkYjhmZTk2NDg3YjM0ZTk5vpIMLg==: 00:26:22.563 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: 00:26:22.564 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:22.564 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:22.564 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmI1NTU0ZDIzOGU4MzRmYTI4Yzk1OTM5NGE3YzE2ODBkYjhmZTk2NDg3YjM0ZTk5vpIMLg==: 00:26:22.564 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: ]] 00:26:22.564 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: 00:26:22.564 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:22.564 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.564 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:22.564 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:22.564 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:22.564 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.564 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:22.564 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.564 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.564 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.564 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.564 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.564 19:27:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.564 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.564 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.564 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.564 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:22.564 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.564 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:22.564 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:22.564 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:22.564 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:22.564 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.564 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.823 nvme0n1 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ1YWM2NDUzMmQ0YjQyN2IyOWVjMDRkMWMwZTI5OGTyk7Xe: 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: 00:26:22.823 19:27:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ1YWM2NDUzMmQ0YjQyN2IyOWVjMDRkMWMwZTI5OGTyk7Xe: 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: ]] 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.823 19:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.082 nvme0n1 00:26:23.082 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.082 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.082 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.082 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.082 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.082 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.082 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.082 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTUxZGNjZDllOTVmNzhhNDRlYzdlNDY2MTU2YjIyNzRmMjQ0ZGUxZjZkNmJkMjQxhsUnTA==: 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTUxZGNjZDllOTVmNzhhNDRlYzdlNDY2MTU2YjIyNzRmMjQ0ZGUxZjZkNmJkMjQxhsUnTA==: 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: ]] 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.083 19:27:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.083 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.342 nvme0n1 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:23.342 
19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmI4ZDQ0ZDE3N2U2NWU5NDJjNzIyZjIyYmFjYWNmN2VhNzVjZjUzNzNjNGViNjIyNmI0MzJjNzA4Njg3NmY3Mfm/Nko=: 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmI4ZDQ0ZDE3N2U2NWU5NDJjNzIyZjIyYmFjYWNmN2VhNzVjZjUzNzNjNGViNjIyNmI0MzJjNzA4Njg3NmY3Mfm/Nko=: 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.342 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
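Each key index in the trace above follows one fixed pattern: nvmet_auth_set_key points the kernel target at the digest/dhgroup/secret under test (the bare echo lines at host/auth.sh@48-51 are its xtrace; the configfs redirections they feed into are not shown by set -x), bdev_nvme_set_options restricts the host to the same digest/dhgroup pair, the attach is issued with the matching key names, the controller's presence is asserted, and it is detached before the next index. For reference, a minimal sketch of the host-side half of that sequence, assuming rpc_cmd forwards to SPDK's scripts/rpc.py and that the key0/ckey0 names were registered with the host earlier in the test (setup not shown in this excerpt):

  # Host side: allow only the digest/dhgroup pair being exercised.
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  # Attach with bidirectional DH-HMAC-CHAP: --dhchap-key authenticates the host
  # to the controller, --dhchap-ctrlr-key authenticates the controller back.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Confirm authentication succeeded (a controller named nvme0 exists), then
  # tear it down so the next key index starts from a clean state.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

Key index 4 has no controller key (its ckey is empty in the trace), so its attach carries --dhchap-key key4 alone; the ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} expansion at host/auth.sh@58 is what drops the argument in that case.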
00:26:23.602 nvme0n1 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI3NmQ4ZGFjYmI1M2FiZTcyMjA1Y2E3YTE4ZDVmMDiY4HXJ: 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI3NmQ4ZGFjYmI1M2FiZTcyMjA1Y2E3YTE4ZDVmMDiY4HXJ: 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: ]] 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:23.602 19:27:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.602 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.868 nvme0n1 00:26:23.868 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.868 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.868 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.868 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.868 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.868 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.868 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.868 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.868 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.868 19:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.868 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.868 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.868 19:27:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:23.868 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.868 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:23.868 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:23.868 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:23.868 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmI1NTU0ZDIzOGU4MzRmYTI4Yzk1OTM5NGE3YzE2ODBkYjhmZTk2NDg3YjM0ZTk5vpIMLg==: 00:26:23.868 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: 00:26:23.868 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:23.868 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:23.868 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmI1NTU0ZDIzOGU4MzRmYTI4Yzk1OTM5NGE3YzE2ODBkYjhmZTk2NDg3YjM0ZTk5vpIMLg==: 00:26:23.868 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: ]] 00:26:23.868 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: 00:26:23.868 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:23.868 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.869 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:23.869 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:23.869 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:23.869 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.869 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:23.869 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.869 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.869 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.869 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.869 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.869 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.869 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.869 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.869 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.869 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:23.869 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.869 19:27:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:23.869 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:23.869 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:23.869 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:23.869 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.869 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.129 nvme0n1 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ1YWM2NDUzMmQ0YjQyN2IyOWVjMDRkMWMwZTI5OGTyk7Xe: 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ1YWM2NDUzMmQ0YjQyN2IyOWVjMDRkMWMwZTI5OGTyk7Xe: 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: ]] 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.129 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.387 nvme0n1 00:26:24.387 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.387 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.387 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.387 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.387 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.387 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.387 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.387 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:24.387 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.387 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.645 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.645 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.645 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:24.645 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.645 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:24.645 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:24.645 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:24.645 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTUxZGNjZDllOTVmNzhhNDRlYzdlNDY2MTU2YjIyNzRmMjQ0ZGUxZjZkNmJkMjQxhsUnTA==: 00:26:24.645 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: 00:26:24.645 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:24.645 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:24.645 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTUxZGNjZDllOTVmNzhhNDRlYzdlNDY2MTU2YjIyNzRmMjQ0ZGUxZjZkNmJkMjQxhsUnTA==: 00:26:24.645 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: ]] 00:26:24.645 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: 00:26:24.645 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:24.645 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.645 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:24.645 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:24.645 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:24.645 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.645 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:24.645 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.646 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.646 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.646 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.646 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.646 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.646 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.646 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.646 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.646 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:24.646 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.646 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:24.646 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:24.646 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:24.646 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:24.646 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.646 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.904 nvme0n1 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmI4ZDQ0ZDE3N2U2NWU5NDJjNzIyZjIyYmFjYWNmN2VhNzVjZjUzNzNjNGViNjIyNmI0MzJjNzA4Njg3NmY3Mfm/Nko=: 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MmI4ZDQ0ZDE3N2U2NWU5NDJjNzIyZjIyYmFjYWNmN2VhNzVjZjUzNzNjNGViNjIyNmI0MzJjNzA4Njg3NmY3Mfm/Nko=: 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.904 19:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.163 nvme0n1 00:26:25.163 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.163 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.163 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.163 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.163 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.163 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.163 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.163 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.163 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.163 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.163 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.163 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:25.163 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.163 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:25.163 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.163 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:25.163 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:25.163 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:25.163 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI3NmQ4ZGFjYmI1M2FiZTcyMjA1Y2E3YTE4ZDVmMDiY4HXJ: 00:26:25.163 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: 00:26:25.163 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:25.163 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:25.163 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI3NmQ4ZGFjYmI1M2FiZTcyMjA1Y2E3YTE4ZDVmMDiY4HXJ: 00:26:25.163 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: ]] 00:26:25.163 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: 00:26:25.163 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:25.163 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.163 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:25.163 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:25.163 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:25.163 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.163 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:25.163 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.163 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.164 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.164 19:27:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.164 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:25.164 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:25.164 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:25.164 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.164 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.164 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:25.164 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.164 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:25.164 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:25.164 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:25.164 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:25.164 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.164 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.422 nvme0n1 00:26:25.422 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.422 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.422 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.422 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.422 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZmI1NTU0ZDIzOGU4MzRmYTI4Yzk1OTM5NGE3YzE2ODBkYjhmZTk2NDg3YjM0ZTk5vpIMLg==: 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmI1NTU0ZDIzOGU4MzRmYTI4Yzk1OTM5NGE3YzE2ODBkYjhmZTk2NDg3YjM0ZTk5vpIMLg==: 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: ]] 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:25.681 19:27:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.681 19:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.940 nvme0n1 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ1YWM2NDUzMmQ0YjQyN2IyOWVjMDRkMWMwZTI5OGTyk7Xe: 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ1YWM2NDUzMmQ0YjQyN2IyOWVjMDRkMWMwZTI5OGTyk7Xe: 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: ]] 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.940 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.567 nvme0n1 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTUxZGNjZDllOTVmNzhhNDRlYzdlNDY2MTU2YjIyNzRmMjQ0ZGUxZjZkNmJkMjQxhsUnTA==: 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTUxZGNjZDllOTVmNzhhNDRlYzdlNDY2MTU2YjIyNzRmMjQ0ZGUxZjZkNmJkMjQxhsUnTA==: 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: ]] 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.567 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:26.568 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.568 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:26.568 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:26.568 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:26.568 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:26.568 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.568 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.826 nvme0n1 00:26:26.827 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.827 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.827 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.827 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.827 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.827 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.827 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.827 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.827 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.827 19:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.827 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.827 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.827 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:26.827 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.827 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:26.827 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:26.827 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:26.827 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmI4ZDQ0ZDE3N2U2NWU5NDJjNzIyZjIyYmFjYWNmN2VhNzVjZjUzNzNjNGViNjIyNmI0MzJjNzA4Njg3NmY3Mfm/Nko=: 00:26:26.827 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:26.827 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:26.827 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:26.827 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmI4ZDQ0ZDE3N2U2NWU5NDJjNzIyZjIyYmFjYWNmN2VhNzVjZjUzNzNjNGViNjIyNmI0MzJjNzA4Njg3NmY3Mfm/Nko=: 00:26:26.827 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:26.827 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:26.827 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.827 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:26.827 19:27:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:26.827 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:26.827 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.827 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:26.827 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.827 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.827 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.827 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.827 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.827 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.827 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.827 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.827 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.827 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:26.827 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.827 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:26.827 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:26.827 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:26.827 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:26.827 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.827 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.393 nvme0n1 00:26:27.393 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.393 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.393 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.393 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.393 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.393 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.393 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.393 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.393 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.393 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.393 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.393 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:27.393 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.393 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:27.393 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.393 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:27.393 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:27.393 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:27.394 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI3NmQ4ZGFjYmI1M2FiZTcyMjA1Y2E3YTE4ZDVmMDiY4HXJ: 00:26:27.394 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: 00:26:27.394 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:27.394 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:27.394 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI3NmQ4ZGFjYmI1M2FiZTcyMjA1Y2E3YTE4ZDVmMDiY4HXJ: 00:26:27.394 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: ]] 00:26:27.394 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTk3MmVjOGYxYzU2ZDI1ZTVkYTAxNDU5ZjUyYjNlNmVlZmM2MzE4OGQxNjYxODY5ZGI3YzEyZGQ1M2EyZDgyZJB6pY4=: 00:26:27.394 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:27.394 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.394 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:27.394 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:27.394 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:27.394 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.394 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:27.394 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.394 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.394 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.394 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.394 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.394 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.394 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.394 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.394 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.394 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:27.394 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.394 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:27.394 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:27.394 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:27.394 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:27.394 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.394 19:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.961 nvme0n1 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmI1NTU0ZDIzOGU4MzRmYTI4Yzk1OTM5NGE3YzE2ODBkYjhmZTk2NDg3YjM0ZTk5vpIMLg==: 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZmI1NTU0ZDIzOGU4MzRmYTI4Yzk1OTM5NGE3YzE2ODBkYjhmZTk2NDg3YjM0ZTk5vpIMLg==: 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: ]] 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.961 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.535 nvme0n1 00:26:28.535 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.535 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.535 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.535 19:27:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.535 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.535 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.535 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.535 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.535 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.535 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.535 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.535 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.535 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:28.535 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.535 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:28.535 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:28.535 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:28.535 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ1YWM2NDUzMmQ0YjQyN2IyOWVjMDRkMWMwZTI5OGTyk7Xe: 00:26:28.535 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: 00:26:28.535 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:28.535 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:28.535 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ1YWM2NDUzMmQ0YjQyN2IyOWVjMDRkMWMwZTI5OGTyk7Xe: 00:26:28.536 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: ]] 00:26:28.536 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzNhODI1ODlmODFlZTAzMmE2YzQxYjVlMGFkMDJmNGNITGCg: 00:26:28.536 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:28.536 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.536 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:28.536 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:28.536 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:28.536 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.536 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:28.536 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.536 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.536 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.536 19:27:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.536 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:28.536 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:28.536 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:28.536 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.536 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.536 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:28.536 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.536 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:28.536 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:28.536 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:28.536 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:28.536 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.536 19:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.104 nvme0n1 00:26:29.104 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.104 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.104 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.104 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.104 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.104 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.104 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.104 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.104 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.104 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.104 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.104 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.104 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:29.104 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.104 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:29.104 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:29.104 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:29.104 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OTUxZGNjZDllOTVmNzhhNDRlYzdlNDY2MTU2YjIyNzRmMjQ0ZGUxZjZkNmJkMjQxhsUnTA==: 00:26:29.104 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: 00:26:29.104 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:29.104 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:29.104 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTUxZGNjZDllOTVmNzhhNDRlYzdlNDY2MTU2YjIyNzRmMjQ0ZGUxZjZkNmJkMjQxhsUnTA==: 00:26:29.104 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: ]] 00:26:29.104 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA4NGYyYWY5NTU4OWVkNDVhMDY5YWI5MTc0NTQyMGMtGhpv: 00:26:29.104 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:29.104 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.104 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:29.104 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:29.104 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:29.104 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.104 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:29.104 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.104 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.363 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.363 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.363 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.363 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.363 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.363 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.363 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.363 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:29.363 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.363 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:29.363 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:29.363 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:29.363 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:29.363 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.363 
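The nvmet_auth_set_key passes above (keyid 2, then keyid 3) configure the kernel target side of DH-HMAC-CHAP before each reconnect: the hash, DH group, and DHHC-1 secrets are written into the host's nvmet configfs entry. A condensed sketch of that helper, assuming a 5.19+ kernel nvmet host entry; the host path and function shape here are illustrative, not the exact auth.sh implementation:

nvmet_auth_set_key() {
    # digest e.g. sha512, dhgroup e.g. ffdhe8192; key/ckey are DHHC-1 secrets
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    local digest=$1 dhgroup=$2 key=$3 ckey=$4
    echo "hmac($digest)" > "$host/dhchap_hash"
    echo "$dhgroup" > "$host/dhchap_dhgroup"
    echo "$key" > "$host/dhchap_key"
    # a controller key is only written for the bidirectional cases
    [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
}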
19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.932 nvme0n1 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmI4ZDQ0ZDE3N2U2NWU5NDJjNzIyZjIyYmFjYWNmN2VhNzVjZjUzNzNjNGViNjIyNmI0MzJjNzA4Njg3NmY3Mfm/Nko=: 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmI4ZDQ0ZDE3N2U2NWU5NDJjNzIyZjIyYmFjYWNmN2VhNzVjZjUzNzNjNGViNjIyNmI0MzJjNzA4Njg3NmY3Mfm/Nko=: 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.932 19:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.501 nvme0n1 00:26:30.501 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.501 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.501 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.501 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.501 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.501 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.501 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.501 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.501 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.501 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.501 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.501 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:30.501 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.501 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:30.501 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:30.501 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
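get_main_ns_ip, expanded above before every attach, resolves the initiator-side address for the active transport by mapping the transport name to an environment variable name and then dereferencing it. A minimal equivalent of the logic the trace shows (TEST_TRANSPORT assumed set to tcp in this run):

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
    )
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1  # ${!ip}: value of the variable named by $ip
    echo "${!ip}"                # here: $NVMF_INITIATOR_IP, i.e. 10.0.0.1
}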
-- # keyid=1 00:26:30.501 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmI1NTU0ZDIzOGU4MzRmYTI4Yzk1OTM5NGE3YzE2ODBkYjhmZTk2NDg3YjM0ZTk5vpIMLg==: 00:26:30.501 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: 00:26:30.501 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:30.501 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:30.501 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmI1NTU0ZDIzOGU4MzRmYTI4Yzk1OTM5NGE3YzE2ODBkYjhmZTk2NDg3YjM0ZTk5vpIMLg==: 00:26:30.501 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: ]] 00:26:30.501 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTIyZmViMzRkNWIxZjYyNzdmYjdjY2Y0MjM5NDJiNWZiMzEyZDAxZmJmYTc4Nzhm8yAl0w==: 00:26:30.501 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:30.501 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.501 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.501 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.501 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:30.501 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:30.501 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:30.501 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.502 request: 00:26:30.502 { 00:26:30.502 "name": "nvme0", 00:26:30.502 "trtype": "tcp", 00:26:30.502 "traddr": "10.0.0.1", 00:26:30.502 "adrfam": "ipv4", 00:26:30.502 "trsvcid": "4420", 00:26:30.502 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:30.502 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:30.502 "prchk_reftag": false, 00:26:30.502 "prchk_guard": false, 00:26:30.502 "hdgst": false, 00:26:30.502 "ddgst": false, 00:26:30.502 "method": "bdev_nvme_attach_controller", 00:26:30.502 "req_id": 1 00:26:30.502 } 00:26:30.502 Got JSON-RPC error response 00:26:30.502 response: 00:26:30.502 { 00:26:30.502 "code": -5, 00:26:30.502 "message": "Input/output error" 00:26:30.502 } 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:30.502 19:27:16 
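The rejected attach above is intentional: the NOT wrapper inverts the exit status, so a connect refused for lack of a DH-HMAC-CHAP key counts as a pass. Condensed from the es bookkeeping visible in the trace (signal-style exits above 128 are still propagated rather than masked):

NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return "$es"  # do not mask crashes/signals
    (( es != 0 ))                   # succeed only if the command failed
}

# e.g.: attaching without any key must be refused by the authenticating target
NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0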
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.502 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.762 request: 00:26:30.762 { 00:26:30.762 "name": "nvme0", 00:26:30.762 "trtype": "tcp", 00:26:30.762 "traddr": "10.0.0.1", 00:26:30.762 "adrfam": "ipv4", 00:26:30.762 "trsvcid": "4420", 00:26:30.762 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:30.762 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:30.762 "prchk_reftag": false, 00:26:30.762 "prchk_guard": false, 00:26:30.762 "hdgst": false, 00:26:30.762 "ddgst": false, 00:26:30.762 "dhchap_key": "key2", 00:26:30.762 "method": "bdev_nvme_attach_controller", 00:26:30.762 "req_id": 1 00:26:30.762 } 00:26:30.762 Got JSON-RPC error response 00:26:30.762 response: 00:26:30.762 { 00:26:30.762 "code": -5, 00:26:30.762 "message": "Input/output error" 00:26:30.762 } 00:26:30.762 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:30.762 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:30.762 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:30.762 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:30.762 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:30.762 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.762 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:30.762 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.762 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.762 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.762 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:30.762 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:26:30.762 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:30.762 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:30.762 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:30.762 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.762 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.762 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:30.762 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.762 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:30.762 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:30.762 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:30.762 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:30.762 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:30.762 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:30.762 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:30.762 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:30.762 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:30.762 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:30.762 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:30.762 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.762 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.762 request: 00:26:30.762 { 00:26:30.762 "name": "nvme0", 00:26:30.762 "trtype": "tcp", 00:26:30.762 "traddr": "10.0.0.1", 00:26:30.762 "adrfam": "ipv4", 00:26:30.762 "trsvcid": "4420", 00:26:30.762 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:30.762 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:30.762 "prchk_reftag": false, 00:26:30.762 "prchk_guard": false, 00:26:30.762 "hdgst": false, 00:26:30.762 "ddgst": false, 00:26:30.762 "dhchap_key": "key1", 00:26:30.762 "dhchap_ctrlr_key": "ckey2", 00:26:30.762 "method": "bdev_nvme_attach_controller", 00:26:30.762 "req_id": 1 00:26:30.762 } 00:26:30.762 Got JSON-RPC error response 00:26:30.762 response: 00:26:30.762 { 00:26:30.762 "code": -5, 00:26:30.763 "message": "Input/output error" 00:26:30.763 } 00:26:30.763 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
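Both negative attaches surface the same JSON-RPC error -5 (Input/output error): once for a host key the target is no longer configured with (key2 alone), and once for a mismatched controller key (key1 with ckey2). Issued stand-alone against the target's RPC socket, the second case looks roughly like this (default /var/tmp/spdk.sock assumed):

scripts/rpc.py bdev_nvme_attach_controller \
    -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey2   # expected to fail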
common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:30.763 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:30.763 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:30.763 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:30.763 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:30.763 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:26:30.763 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:26:30.763 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:30.763 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:30.763 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:26:30.763 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:30.763 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:26:30.763 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:30.763 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:30.763 rmmod nvme_tcp 00:26:30.763 rmmod nvme_fabrics 00:26:30.763 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:30.763 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:26:30.763 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:26:30.763 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1657942 ']' 00:26:30.763 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1657942 00:26:30.763 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 1657942 ']' 00:26:30.763 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 1657942 00:26:30.763 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:26:30.763 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:30.763 19:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1657942 00:26:31.023 19:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:31.023 19:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:31.023 19:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1657942' 00:26:31.023 killing process with pid 1657942 00:26:31.023 19:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 1657942 00:26:31.023 19:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 1657942 00:26:31.023 19:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:31.023 19:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:31.023 19:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:31.023 19:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:31.023 19:27:17 
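nvmftestfini then unwinds the initiator side: the transport modules are unloaded (the rmmod nvme_tcp / rmmod nvme_fabrics lines above) and the target process is killed and reaped. Roughly, with helper details elided:

sync
modprobe -v -r nvme-tcp       # pulls nvme_tcp and nvme_fabrics out
modprobe -v -r nvme-fabrics
if kill -0 "$nvmfpid" 2> /dev/null; then   # target still alive?
    kill "$nvmfpid"                        # SIGTERM the nvmf_tgt reactor
    wait "$nvmfpid" || true
fi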
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:31.023 19:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.023 19:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:31.023 19:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.560 19:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:33.560 19:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:33.560 19:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:33.560 19:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:33.560 19:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:33.560 19:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:26:33.560 19:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:33.560 19:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:33.560 19:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:33.560 19:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:33.561 19:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:33.561 19:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:33.561 19:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:36.853 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:36.853 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:36.853 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:36.853 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:36.853 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:36.853 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:36.853 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:36.853 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:36.853 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:36.853 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:36.853 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:36.853 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:36.853 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:36.853 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:36.853 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:36.853 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:38.230 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:26:38.230 19:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.5ju /tmp/spdk.key-null.klA /tmp/spdk.key-sha256.QzO /tmp/spdk.key-sha384.joC /tmp/spdk.key-sha512.bcA /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:38.230 19:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
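clean_kernel_target has to unwind configfs in strict order, as the trace shows: offline the namespace, unlink the port export, then remove leaf directories before their parents, otherwise rmdir fails with EBUSY. In outline:

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
echo 0 > "$subsys/namespaces/1/enable"                              # offline ns
rm -f "/sys/kernel/config/nvmet/ports/1/subsystems/${subsys##*/}"   # unexport
rmdir "$subsys/namespaces/1"
rmdir /sys/kernel/config/nvmet/ports/1
rmdir "$subsys"
modprobe -r nvmet_tcp nvmet                                         # drop target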
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:41.521 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:41.521 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:26:41.521 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:41.521 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:41.521 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:41.521 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:41.521 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:41.521 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:41.521 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:41.521 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:41.521 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:41.521 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:41.521 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:41.521 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:41.521 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:41.521 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:41.521 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:41.521 00:26:41.521 real 0m52.515s 00:26:41.521 user 0m44.675s 00:26:41.521 sys 0m14.917s 00:26:41.521 19:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:41.521 19:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.521 ************************************ 00:26:41.521 END TEST nvmf_auth_host 00:26:41.521 ************************************ 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.781 ************************************ 00:26:41.781 START TEST nvmf_digest 00:26:41.781 ************************************ 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:41.781 * Looking for test storage... 
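The setup.sh passes above first move the ioatdma and nvme functions to vfio-pci, then report "Already using the vfio-pci driver" on the re-run. Per device, the rebind boils down to the standard sysfs driver_override sequence; one illustrative BDF shown, not setup.sh's exact code path:

dev=0000:00:04.0
echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind"      # detach ioatdma
echo vfio-pci > "/sys/bus/pci/devices/$dev/driver_override"  # pin the new driver
echo "$dev" > /sys/bus/pci/drivers_probe                     # reprobe -> vfio-pci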
00:26:41.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:41.781 
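The exported PATH above carries the same golangci/protoc/go prefix many times because paths/export.sh re-prepends unconditionally each time it is sourced. A guard of this shape (hypothetical helper, not in export.sh) would keep the export idempotent:

path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;           # already on PATH, skip
        *) PATH=$1:$PATH ;;
    esac
}
path_prepend /opt/go/1.21.1/bin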
19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:26:41.781 19:27:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:48.354 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:48.354 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:26:48.354 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:48.354 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:48.354 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:48.354 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:48.354 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:48.354 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:26:48.354 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:48.354 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:26:48.354 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:26:48.354 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:26:48.354 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:26:48.354 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:26:48.354 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:26:48.354 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:48.354 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:48.354 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:48.354 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:48.354 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:48.354 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:48.354 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:48.354 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:48.354 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:48.354 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:48.354 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:48.354 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:48.354 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:48.354 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:48.354 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:48.355 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:48.355 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:48.355 
19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:48.355 Found net devices under 0000:af:00.0: cvl_0_0 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:48.355 Found net devices under 0000:af:00.1: cvl_0_1 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:48.355 19:27:34 
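gather_supported_nvmf_pci_devs matches the e810 device IDs (0x159b) and then finds each port's interface through sysfs; the lookup the trace expands at common.sh@383/@399/@400 is simply:

pci=0000:af:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path
echo "Found net devices under $pci: ${pci_net_devs[*]}"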
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:48.355 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:48.614 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:48.614 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:48.614 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:48.614 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:48.614 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:26:48.614 00:26:48.614 --- 10.0.0.2 ping statistics --- 00:26:48.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.614 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:26:48.614 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:48.614 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:48.614 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:26:48.614 00:26:48.614 --- 10.0.0.1 ping statistics --- 00:26:48.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.614 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:26:48.614 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:48.614 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:26:48.614 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:48.614 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:48.614 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:48.614 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:48.614 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:48.614 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:48.614 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:48.614 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:48.614 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:48.614 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:48.614 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:48.614 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:48.614 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:48.614 ************************************ 00:26:48.614 START TEST nvmf_digest_clean 00:26:48.614 ************************************ 00:26:48.614 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:26:48.614 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:48.614 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
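nvmf_tcp_init thus ends with a two-port loopback rig: the target port lives in the cvl_0_0_ns_spdk namespace as 10.0.0.2 while the initiator stays in the root namespace as 10.0.0.1, and the two pings above verify both directions. The topology in brief (interface names from this rig; root privileges assumed):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                                  # smoke test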
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:48.614 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:48.614 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:48.614 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:48.614 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:48.614 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:48.614 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:48.614 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1672101 00:26:48.614 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1672101 00:26:48.614 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:48.614 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1672101 ']' 00:26:48.614 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:48.614 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:48.614 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:48.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:48.614 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:48.614 19:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:48.614 [2024-07-24 19:27:34.849322] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:26:48.614 [2024-07-24 19:27:34.849369] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:48.873 EAL: No free 2048 kB hugepages reported on node 1 00:26:48.873 [2024-07-24 19:27:34.921864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.873 [2024-07-24 19:27:34.993825] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:48.873 [2024-07-24 19:27:34.993860] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:48.873 [2024-07-24 19:27:34.993870] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:48.873 [2024-07-24 19:27:34.993882] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:48.874 [2024-07-24 19:27:34.993889] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:48.874 [2024-07-24 19:27:34.993910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.507 19:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:49.507 19:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:26:49.507 19:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:49.507 19:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:49.507 19:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:49.507 19:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:49.507 19:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:49.507 19:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:49.507 19:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:49.507 19:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.508 19:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:49.768 null0 00:26:49.768 [2024-07-24 19:27:35.769345] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:49.768 [2024-07-24 19:27:35.793536] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:49.768 19:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.768 19:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:49.768 19:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:49.768 19:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:49.768 19:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:49.768 19:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:49.768 19:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:49.768 19:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:49.768 19:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1672310 00:26:49.768 19:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1672310 /var/tmp/bperf.sock 00:26:49.768 19:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:49.768 19:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1672310 ']' 00:26:49.768 19:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:49.768 19:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:26:49.768 19:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:49.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:49.768 19:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:49.768 19:27:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:49.768 [2024-07-24 19:27:35.848030] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:26:49.768 [2024-07-24 19:27:35.848080] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1672310 ] 00:26:49.768 EAL: No free 2048 kB hugepages reported on node 1 00:26:49.768 [2024-07-24 19:27:35.918456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.768 [2024-07-24 19:27:35.992721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.705 19:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:50.705 19:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:26:50.705 19:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:50.705 19:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:50.705 19:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:50.705 19:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:50.705 19:27:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:51.273 nvme0n1 00:26:51.273 19:27:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:51.273 19:27:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:51.273 Running I/O for 2 seconds... 
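Every bperf run in this file follows the same RPC sequence against /var/tmp/bperf.sock, all of it visible in the trace above. Condensed into one block (the commands are verbatim from the log; the waitforlisten step between launching bdevperf and the first RPC is elided):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Launch bdevperf idle: -z waits for the perform_tests RPC,
  # --wait-for-rpc defers subsystem initialization.
  "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
          -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # Finish the deferred initialization.
  "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock framework_start_init
  # Attach the NVMe/TCP controller with data digest enabled (--ddgst).
  "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
          -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Run the timed workload against the resulting nvme0n1 bdev.
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

The results table that follows is printed by bdevperf itself at the end of the two-second run.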
00:26:53.180 00:26:53.180 Latency(us) 00:26:53.180 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:53.180 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:53.180 nvme0n1 : 2.00 28638.50 111.87 0.00 0.00 4464.82 2136.47 11901.34 00:26:53.180 =================================================================================================================== 00:26:53.180 Total : 28638.50 111.87 0.00 0.00 4464.82 2136.47 11901.34 00:26:53.180 0 00:26:53.180 19:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:53.180 19:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:53.180 19:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:53.180 19:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:53.180 19:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:53.180 | select(.opcode=="crc32c") 00:26:53.180 | "\(.module_name) \(.executed)"' 00:26:53.439 19:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:53.439 19:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:53.439 19:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:53.439 19:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:53.439 19:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1672310 00:26:53.439 19:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1672310 ']' 00:26:53.439 19:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1672310 00:26:53.439 19:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:26:53.439 19:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:53.439 19:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1672310 00:26:53.439 19:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:53.439 19:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:53.439 19:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1672310' 00:26:53.439 killing process with pid 1672310 00:26:53.439 19:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1672310 00:26:53.439 Received shutdown signal, test time was about 2.000000 seconds 00:26:53.439 00:26:53.439 Latency(us) 00:26:53.439 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:53.439 =================================================================================================================== 00:26:53.439 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:53.439 19:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 1672310 00:26:53.698 19:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:53.698 19:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:53.698 19:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:53.698 19:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:53.698 19:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:53.699 19:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:53.699 19:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:53.699 19:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1672926 00:26:53.699 19:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1672926 /var/tmp/bperf.sock 00:26:53.699 19:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:53.699 19:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1672926 ']' 00:26:53.699 19:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:53.699 19:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:53.699 19:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:53.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:53.699 19:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:53.699 19:27:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:53.699 [2024-07-24 19:27:39.838433] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:26:53.699 [2024-07-24 19:27:39.838487] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1672926 ] 00:26:53.699 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:53.699 Zero copy mechanism will not be used. 
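After each run the harness reads the accel framework statistics back from the bperf instance and asserts that crc32c work actually executed, and on the expected module (plain software here, since DSA is disabled throughout this test). The check from the trace above, usable standalone:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock accel_get_stats \
          | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # Expected shape: "software <count>", with <count> greater than zero.

The result tables are also easy to sanity-check by hand: for the 4 KiB randread run above, 28638.50 IOPS * 4096 B = 117,303,296 B/s, i.e. 111.87 MiB/s, matching the logged MiB/s column.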
00:26:53.699 EAL: No free 2048 kB hugepages reported on node 1 00:26:53.699 [2024-07-24 19:27:39.908255] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.958 [2024-07-24 19:27:39.975937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:54.526 19:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:54.526 19:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:26:54.526 19:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:54.526 19:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:54.526 19:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:54.785 19:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:54.785 19:27:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:55.044 nvme0n1 00:26:55.044 19:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:55.044 19:27:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:55.044 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:55.044 Zero copy mechanism will not be used. 00:26:55.044 Running I/O for 2 seconds... 
00:26:56.949 00:26:56.949 Latency(us) 00:26:56.949 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:56.949 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:56.949 nvme0n1 : 2.00 3854.54 481.82 0.00 0.00 4148.39 983.04 7811.89 00:26:56.949 =================================================================================================================== 00:26:56.949 Total : 3854.54 481.82 0.00 0.00 4148.39 983.04 7811.89 00:26:56.949 0 00:26:57.213 19:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:57.213 19:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:57.213 19:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:57.213 19:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:57.213 | select(.opcode=="crc32c") 00:26:57.213 | "\(.module_name) \(.executed)"' 00:26:57.213 19:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:57.213 19:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:57.213 19:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:57.213 19:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:57.213 19:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:57.213 19:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1672926 00:26:57.213 19:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1672926 ']' 00:26:57.213 19:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1672926 00:26:57.213 19:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:26:57.213 19:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:57.213 19:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1672926 00:26:57.213 19:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:57.214 19:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:57.214 19:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1672926' 00:26:57.214 killing process with pid 1672926 00:26:57.214 19:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1672926 00:26:57.214 Received shutdown signal, test time was about 2.000000 seconds 00:26:57.214 00:26:57.214 Latency(us) 00:26:57.214 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:57.214 =================================================================================================================== 00:26:57.214 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:57.214 19:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 1672926 00:26:57.475 19:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:57.475 19:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:57.475 19:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:57.475 19:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:57.475 19:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:57.475 19:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:57.475 19:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:57.475 19:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1673560 00:26:57.475 19:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1673560 /var/tmp/bperf.sock 00:26:57.475 19:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:57.475 19:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1673560 ']' 00:26:57.475 19:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:57.475 19:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:57.475 19:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:57.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:57.475 19:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:57.475 19:27:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:57.475 [2024-07-24 19:27:43.656933] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
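The teardown sequence above is SPDK's killprocess helper from autotest_common.sh: confirm the pid is still alive, refuse to kill anything whose command name is sudo, then kill and reap it so the next stage starts clean. A condensed sketch of the logic visible in the trace (the real helper is not shown in full in this log and handles more cases):

  killprocess() {
          local pid=$1
          [ -n "$pid" ] || return 1
          kill -0 "$pid" || return 1                      # still alive?
          if [ "$(uname)" = Linux ]; then
                  # Never kill a process running as sudo.
                  [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
          fi
          echo "killing process with pid $pid"
          kill "$pid"
          wait "$pid"                                     # reap before moving on
  }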
00:26:57.475 [2024-07-24 19:27:43.656987] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1673560 ] 00:26:57.475 EAL: No free 2048 kB hugepages reported on node 1 00:26:57.733 [2024-07-24 19:27:43.727346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.733 [2024-07-24 19:27:43.801816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.300 19:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:58.300 19:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:26:58.300 19:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:58.300 19:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:58.300 19:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:58.559 19:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:58.559 19:27:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:59.127 nvme0n1 00:26:59.127 19:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:59.127 19:27:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:59.127 Running I/O for 2 seconds... 
00:27:01.033 00:27:01.033 Latency(us) 00:27:01.033 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:01.033 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:01.033 nvme0n1 : 2.00 29644.00 115.80 0.00 0.00 4312.22 1887.44 8965.32 00:27:01.033 =================================================================================================================== 00:27:01.033 Total : 29644.00 115.80 0.00 0.00 4312.22 1887.44 8965.32 00:27:01.033 0 00:27:01.033 19:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:01.033 19:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:01.033 19:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:01.033 19:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:01.033 | select(.opcode=="crc32c") 00:27:01.033 | "\(.module_name) \(.executed)"' 00:27:01.033 19:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:01.292 19:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:01.292 19:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:01.292 19:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:01.292 19:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:01.292 19:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1673560 00:27:01.292 19:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1673560 ']' 00:27:01.292 19:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1673560 00:27:01.292 19:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:01.292 19:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:01.292 19:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1673560 00:27:01.292 19:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:01.292 19:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:01.292 19:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1673560' 00:27:01.292 killing process with pid 1673560 00:27:01.292 19:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1673560 00:27:01.292 Received shutdown signal, test time was about 2.000000 seconds 00:27:01.292 00:27:01.292 Latency(us) 00:27:01.292 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:01.292 =================================================================================================================== 00:27:01.292 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:01.292 19:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 1673560 00:27:01.551 19:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:27:01.551 19:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:01.551 19:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:01.551 19:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:01.551 19:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:01.551 19:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:01.551 19:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:01.551 19:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1674263 00:27:01.551 19:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1674263 /var/tmp/bperf.sock 00:27:01.551 19:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:01.551 19:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1674263 ']' 00:27:01.551 19:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:01.551 19:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:01.551 19:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:01.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:01.551 19:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:01.551 19:27:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:01.551 [2024-07-24 19:27:47.668803] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:27:01.551 [2024-07-24 19:27:47.668856] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1674263 ] 00:27:01.551 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:01.551 Zero copy mechanism will not be used. 
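nvmf_digest_clean is one helper, run_bperf, driven over a small parameter matrix: randread and randwrite, each at 4 KiB with queue depth 128 and at 128 KiB with queue depth 16, always with DSA scanning off (the trailing false). The run just launched above is the last of the four. A wrapper loop reproducing that matrix (illustrative only; the script calls run_bperf directly):

  for rw in randread randwrite; do
          run_bperf "$rw" 4096 128 false        # small blocks, deep queue
          run_bperf "$rw" 131072 16 false       # large blocks, shallow queue
  done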
00:27:01.551 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.551 [2024-07-24 19:27:47.739963] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.810 [2024-07-24 19:27:47.803682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:02.378 19:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:02.378 19:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:02.378 19:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:02.378 19:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:02.378 19:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:02.638 19:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:02.638 19:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:02.897 nvme0n1 00:27:02.897 19:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:02.897 19:27:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:02.897 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:02.897 Zero copy mechanism will not be used. 00:27:02.897 Running I/O for 2 seconds... 
00:27:04.838 00:27:04.838 Latency(us) 00:27:04.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:04.838 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:04.838 nvme0n1 : 2.00 4955.25 619.41 0.00 0.00 3223.76 1979.19 14994.64 00:27:04.838 =================================================================================================================== 00:27:04.838 Total : 4955.25 619.41 0.00 0.00 3223.76 1979.19 14994.64 00:27:04.838 0 00:27:04.838 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:04.838 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:04.838 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:04.838 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:04.838 | select(.opcode=="crc32c") 00:27:04.838 | "\(.module_name) \(.executed)"' 00:27:04.838 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:05.098 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:05.098 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:05.098 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:05.098 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:05.098 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1674263 00:27:05.098 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1674263 ']' 00:27:05.098 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1674263 00:27:05.098 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:05.098 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:05.098 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1674263 00:27:05.098 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:05.098 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:05.098 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1674263' 00:27:05.098 killing process with pid 1674263 00:27:05.098 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1674263 00:27:05.098 Received shutdown signal, test time was about 2.000000 seconds 00:27:05.098 00:27:05.098 Latency(us) 00:27:05.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:05.098 =================================================================================================================== 00:27:05.098 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:05.098 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 1674263 00:27:05.358 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1672101 00:27:05.358 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1672101 ']' 00:27:05.358 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1672101 00:27:05.358 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:05.358 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:05.358 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1672101 00:27:05.358 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:05.358 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:05.358 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1672101' 00:27:05.358 killing process with pid 1672101 00:27:05.358 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1672101 00:27:05.358 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1672101 00:27:05.618 00:27:05.618 real 0m16.930s 00:27:05.618 user 0m31.817s 00:27:05.618 sys 0m5.059s 00:27:05.618 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:05.618 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:05.618 ************************************ 00:27:05.618 END TEST nvmf_digest_clean 00:27:05.618 ************************************ 00:27:05.618 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:05.618 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:05.618 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:05.618 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:05.618 ************************************ 00:27:05.618 START TEST nvmf_digest_error 00:27:05.618 ************************************ 00:27:05.618 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:27:05.618 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:05.618 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:05.618 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:05.618 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:05.618 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1674979 00:27:05.618 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1674979 00:27:05.618 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:05.618 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1674979 ']' 00:27:05.618 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:05.618 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:05.618 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:05.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:05.618 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:05.618 19:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:05.878 [2024-07-24 19:27:51.868266] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:27:05.878 [2024-07-24 19:27:51.868316] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:05.878 EAL: No free 2048 kB hugepages reported on node 1 00:27:05.878 [2024-07-24 19:27:51.942372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.878 [2024-07-24 19:27:52.014827] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:05.878 [2024-07-24 19:27:52.014866] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:05.878 [2024-07-24 19:27:52.014876] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:05.878 [2024-07-24 19:27:52.014885] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:05.878 [2024-07-24 19:27:52.014892] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:05.878 [2024-07-24 19:27:52.014912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.447 19:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:06.447 19:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:27:06.447 19:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:06.447 19:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:06.447 19:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:06.707 19:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:06.707 19:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:06.707 19:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.707 19:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:06.707 [2024-07-24 19:27:52.708952] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:06.707 19:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.707 19:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:06.707 19:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:06.707 19:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.707 19:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:06.707 null0 00:27:06.707 [2024-07-24 19:27:52.801093] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:06.707 [2024-07-24 19:27:52.825276] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:06.707 19:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.707 19:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:06.707 19:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:06.707 19:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:06.707 19:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:06.707 19:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:06.707 19:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1675120 00:27:06.707 19:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1675120 /var/tmp/bperf.sock 00:27:06.707 19:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:06.707 19:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1675120 ']' 
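nvmf_digest_error differs from the clean variant in one key step: before the target finishes framework init, crc32c is reassigned from the software module to the accel error-injection module, which can then be told at runtime to corrupt a batch of digest results. Condensed from the RPCs in this part of the trace (the accel_assign_opc call appears just above, the injection calls follow below; rpc_cmd is the harness helper that runs rpc.py against the target inside the namespace):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Target: route all crc32c operations through the error-injection module.
  rpc_cmd accel_assign_opc -o crc32c -m error
  # Initiator: keep NVMe error statistics; a retry count of -1 retries
  # failed I/O without limit.
  "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_set_options \
          --nvme-error-stat --bdev-retry-count -1
  # Injection starts disabled, then the next 256 crc32c results are corrupted.
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256

Each corrupted digest then surfaces in the output below as a 'data digest error on tqpair' at the TCP transport layer, paired with a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, which the initiator, configured with an unlimited retry count, resubmits.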
00:27:06.707 19:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:06.707 19:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:06.707 19:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:06.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:06.707 19:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:06.707 19:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:06.707 [2024-07-24 19:27:52.876451] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:27:06.707 [2024-07-24 19:27:52.876499] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1675120 ] 00:27:06.707 EAL: No free 2048 kB hugepages reported on node 1 00:27:06.707 [2024-07-24 19:27:52.945688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.967 [2024-07-24 19:27:53.022241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.534 19:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:07.534 19:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:27:07.534 19:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:07.534 19:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:07.793 19:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:07.793 19:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.793 19:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:07.793 19:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.793 19:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:07.793 19:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:08.051 nvme0n1 00:27:08.051 19:27:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:08.051 19:27:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.051 19:27:54 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:08.051 19:27:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.051 19:27:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:08.051 19:27:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:08.310 Running I/O for 2 seconds... 00:27:08.310 [2024-07-24 19:27:54.380396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:08.310 [2024-07-24 19:27:54.380430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.310 [2024-07-24 19:27:54.380442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.310 [2024-07-24 19:27:54.389640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:08.310 [2024-07-24 19:27:54.389665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.310 [2024-07-24 19:27:54.389676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.310 [2024-07-24 19:27:54.398944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:08.310 [2024-07-24 19:27:54.398967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.310 [2024-07-24 19:27:54.398978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.310 [2024-07-24 19:27:54.407591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:08.310 [2024-07-24 19:27:54.407614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:16040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.310 [2024-07-24 19:27:54.407625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.310 [2024-07-24 19:27:54.416841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:08.310 [2024-07-24 19:27:54.416862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.310 [2024-07-24 19:27:54.416872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.310 [2024-07-24 19:27:54.425138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:08.310 [2024-07-24 19:27:54.425159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.310 [2024-07-24 19:27:54.425170] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.310 [2024-07-24 19:27:54.434681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:08.310 [2024-07-24 19:27:54.434703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.310 [2024-07-24 19:27:54.434720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.310 [2024-07-24 19:27:54.443381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:08.310 [2024-07-24 19:27:54.443403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.310 [2024-07-24 19:27:54.443418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.310 [2024-07-24 19:27:54.452874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:08.310 [2024-07-24 19:27:54.452896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.310 [2024-07-24 19:27:54.452906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.310 [2024-07-24 19:27:54.461646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:08.310 [2024-07-24 19:27:54.461668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.310 [2024-07-24 19:27:54.461679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.310 [2024-07-24 19:27:54.470090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:08.310 [2024-07-24 19:27:54.470113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.310 [2024-07-24 19:27:54.470123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.310 [2024-07-24 19:27:54.479283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:08.310 [2024-07-24 19:27:54.479305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.310 [2024-07-24 19:27:54.479316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.311 [2024-07-24 19:27:54.488507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:08.311 [2024-07-24 19:27:54.488529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.311 
[2024-07-24 19:27:54.488540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[log condensed: from 19:27:54.497 through 19:27:55.598 (elapsed 00:27:08.311 to 00:27:09.616) the same three-message sequence repeats for dozens of READ commands on qid:1, differing only in timestamp, cid and lba:]
  nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0)
  nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:<cid> nsid:1 lba:<lba> len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
  nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:<cid> cdw0:0 sqhd:0001 p:0 m:0 dnr:0
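What this stretch of the log shows: every inbound DATA PDU on tqpair 0x1dec1c0 is failing the NVMe/TCP data digest (DDGST) check. The digest is a CRC32C over the PDU payload; when the CRC the receiver recomputes (in nvme_tcp_accel_seq_recv_compute_crc32_done above) disagrees with the digest carried on the wire, the driver completes the command with the generic status COMMAND TRANSIENT TRANSPORT ERROR (00/22) and dnr:0, i.e. retryable. A minimal sketch of that check follows; the table-driven CRC32C and the ddgst_verify() helper are illustrative stand-ins, not SPDK's internal nvme_tcp API.

/*
 * Sketch of an NVMe/TCP-style data digest check: CRC32C (Castagnoli,
 * reflected polynomial 0x82F63B78) over the received payload, compared
 * against the digest that trailed the PDU.
 */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

static uint32_t crc32c_table[256];

/* Build the byte-wise lookup table once. */
static void crc32c_init(void)
{
	for (uint32_t i = 0; i < 256; i++) {
		uint32_t crc = i;
		for (int j = 0; j < 8; j++) {
			crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
		}
		crc32c_table[i] = crc;
	}
}

static uint32_t crc32c(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;

	while (len--) {
		crc = crc32c_table[(crc ^ *buf++) & 0xFF] ^ (crc >> 8);
	}
	return crc ^ 0xFFFFFFFFu;
}

/* Hypothetical helper: returns 0 if the payload matches the received digest. */
static int ddgst_verify(const uint8_t *payload, size_t len, uint32_t recv_ddgst)
{
	return crc32c(payload, len) == recv_ddgst ? 0 : -1;
}

int main(void)
{
	uint8_t block[512] = {0};	/* stand-in for one 512-byte LBA of READ data */

	crc32c_init();
	uint32_t wire_ddgst = crc32c(block, sizeof(block));	/* digest as sent */

	block[17] ^= 0x04;	/* payload corrupted after the digest was computed */
	printf("%s\n", ddgst_verify(block, sizeof(block), wire_ddgst) ?
	       "data digest error" : "ok");	/* prints "data digest error" */
	return 0;
}

That every READ on this qpair fails the check, each completing with dnr:0 (retryable), is consistent with a deliberate digest-error-injection test rather than actual wire corruption.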
0x0 00:27:09.616 [2024-07-24 19:27:55.607806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.616 [2024-07-24 19:27:55.616419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.616 [2024-07-24 19:27:55.616442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.616 [2024-07-24 19:27:55.616453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.617 [2024-07-24 19:27:55.624927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.617 [2024-07-24 19:27:55.624948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.617 [2024-07-24 19:27:55.624959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.617 [2024-07-24 19:27:55.633759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.617 [2024-07-24 19:27:55.633781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.617 [2024-07-24 19:27:55.633792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.617 [2024-07-24 19:27:55.642760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.617 [2024-07-24 19:27:55.642782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.617 [2024-07-24 19:27:55.642793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.617 [2024-07-24 19:27:55.651511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.617 [2024-07-24 19:27:55.651533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.617 [2024-07-24 19:27:55.651543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.617 [2024-07-24 19:27:55.660113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.617 [2024-07-24 19:27:55.660136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.617 [2024-07-24 19:27:55.660147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.617 [2024-07-24 19:27:55.669549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.617 [2024-07-24 19:27:55.669571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 
lba:1689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.617 [2024-07-24 19:27:55.669581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.617 [2024-07-24 19:27:55.678276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.617 [2024-07-24 19:27:55.678300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.617 [2024-07-24 19:27:55.678310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.617 [2024-07-24 19:27:55.687823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.617 [2024-07-24 19:27:55.687848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.617 [2024-07-24 19:27:55.687859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.617 [2024-07-24 19:27:55.695448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.617 [2024-07-24 19:27:55.695470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.617 [2024-07-24 19:27:55.695480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.617 [2024-07-24 19:27:55.705490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.617 [2024-07-24 19:27:55.705512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.617 [2024-07-24 19:27:55.705523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.617 [2024-07-24 19:27:55.714504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.617 [2024-07-24 19:27:55.714526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.617 [2024-07-24 19:27:55.714536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.617 [2024-07-24 19:27:55.722464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.617 [2024-07-24 19:27:55.722486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.617 [2024-07-24 19:27:55.722497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.617 [2024-07-24 19:27:55.733157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.617 [2024-07-24 19:27:55.733180] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.617 [2024-07-24 19:27:55.733190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.617 [2024-07-24 19:27:55.740988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.617 [2024-07-24 19:27:55.741010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.617 [2024-07-24 19:27:55.741020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.617 [2024-07-24 19:27:55.749753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.617 [2024-07-24 19:27:55.749774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.617 [2024-07-24 19:27:55.749784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.617 [2024-07-24 19:27:55.758637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.617 [2024-07-24 19:27:55.758659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.617 [2024-07-24 19:27:55.758669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.617 [2024-07-24 19:27:55.767798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.617 [2024-07-24 19:27:55.767820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.617 [2024-07-24 19:27:55.767830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.617 [2024-07-24 19:27:55.776945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.617 [2024-07-24 19:27:55.776968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.617 [2024-07-24 19:27:55.776978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.617 [2024-07-24 19:27:55.785052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.617 [2024-07-24 19:27:55.785075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.617 [2024-07-24 19:27:55.785085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.617 [2024-07-24 19:27:55.794302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 
00:27:09.617 [2024-07-24 19:27:55.794324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.617 [2024-07-24 19:27:55.794335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.617 [2024-07-24 19:27:55.804272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.617 [2024-07-24 19:27:55.804293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.617 [2024-07-24 19:27:55.804303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.617 [2024-07-24 19:27:55.813030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.617 [2024-07-24 19:27:55.813052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.617 [2024-07-24 19:27:55.813062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.617 [2024-07-24 19:27:55.821507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.617 [2024-07-24 19:27:55.821529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.617 [2024-07-24 19:27:55.821539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.617 [2024-07-24 19:27:55.830487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.617 [2024-07-24 19:27:55.830510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.617 [2024-07-24 19:27:55.830520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.617 [2024-07-24 19:27:55.839483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.617 [2024-07-24 19:27:55.839504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.618 [2024-07-24 19:27:55.839517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.618 [2024-07-24 19:27:55.848272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.618 [2024-07-24 19:27:55.848295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.618 [2024-07-24 19:27:55.848305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.878 [2024-07-24 19:27:55.858401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.878 [2024-07-24 19:27:55.858424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.878 [2024-07-24 19:27:55.858435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.878 [2024-07-24 19:27:55.867216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.878 [2024-07-24 19:27:55.867238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.878 [2024-07-24 19:27:55.867249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.878 [2024-07-24 19:27:55.875784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.878 [2024-07-24 19:27:55.875806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.878 [2024-07-24 19:27:55.875817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.878 [2024-07-24 19:27:55.884999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.878 [2024-07-24 19:27:55.885021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.878 [2024-07-24 19:27:55.885031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.878 [2024-07-24 19:27:55.893158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.878 [2024-07-24 19:27:55.893180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.878 [2024-07-24 19:27:55.893190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.878 [2024-07-24 19:27:55.902346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.878 [2024-07-24 19:27:55.902368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.878 [2024-07-24 19:27:55.902378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.878 [2024-07-24 19:27:55.910868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.878 [2024-07-24 19:27:55.910893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.878 [2024-07-24 19:27:55.910904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.878 [2024-07-24 19:27:55.920486] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.878 [2024-07-24 19:27:55.920515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.878 [2024-07-24 19:27:55.920526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.878 [2024-07-24 19:27:55.929926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.878 [2024-07-24 19:27:55.929948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.878 [2024-07-24 19:27:55.929959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.878 [2024-07-24 19:27:55.938012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.878 [2024-07-24 19:27:55.938034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.878 [2024-07-24 19:27:55.938044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.878 [2024-07-24 19:27:55.947336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.878 [2024-07-24 19:27:55.947359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.878 [2024-07-24 19:27:55.947370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.878 [2024-07-24 19:27:55.956555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.878 [2024-07-24 19:27:55.956577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.878 [2024-07-24 19:27:55.956587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.878 [2024-07-24 19:27:55.964323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.878 [2024-07-24 19:27:55.964344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.879 [2024-07-24 19:27:55.964355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.879 [2024-07-24 19:27:55.974530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.879 [2024-07-24 19:27:55.974552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.879 [2024-07-24 19:27:55.974563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:27:09.879 [2024-07-24 19:27:55.983123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.879 [2024-07-24 19:27:55.983144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.879 [2024-07-24 19:27:55.983155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.879 [2024-07-24 19:27:55.992012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.879 [2024-07-24 19:27:55.992034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.879 [2024-07-24 19:27:55.992044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.879 [2024-07-24 19:27:56.002117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.879 [2024-07-24 19:27:56.002140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.879 [2024-07-24 19:27:56.002150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.879 [2024-07-24 19:27:56.010339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.879 [2024-07-24 19:27:56.010361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.879 [2024-07-24 19:27:56.010371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.879 [2024-07-24 19:27:56.018972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.879 [2024-07-24 19:27:56.018994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.879 [2024-07-24 19:27:56.019004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.879 [2024-07-24 19:27:56.027399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.879 [2024-07-24 19:27:56.027421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.879 [2024-07-24 19:27:56.027432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.879 [2024-07-24 19:27:56.036625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.879 [2024-07-24 19:27:56.036647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.879 [2024-07-24 19:27:56.036658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.879 [2024-07-24 19:27:56.045456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.879 [2024-07-24 19:27:56.045478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.879 [2024-07-24 19:27:56.045489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.879 [2024-07-24 19:27:56.054087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.879 [2024-07-24 19:27:56.054108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.879 [2024-07-24 19:27:56.054118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.879 [2024-07-24 19:27:56.062838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.879 [2024-07-24 19:27:56.062860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.879 [2024-07-24 19:27:56.062870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.879 [2024-07-24 19:27:56.071899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.879 [2024-07-24 19:27:56.071923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.879 [2024-07-24 19:27:56.071934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.879 [2024-07-24 19:27:56.081227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.879 [2024-07-24 19:27:56.081249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.879 [2024-07-24 19:27:56.081259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.879 [2024-07-24 19:27:56.089534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.879 [2024-07-24 19:27:56.089556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:15189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.879 [2024-07-24 19:27:56.089566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.879 [2024-07-24 19:27:56.099480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.879 [2024-07-24 19:27:56.099502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.879 [2024-07-24 19:27:56.099512] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.879 [2024-07-24 19:27:56.107238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:09.879 [2024-07-24 19:27:56.107259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.879 [2024-07-24 19:27:56.107270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.140 [2024-07-24 19:27:56.118272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:10.140 [2024-07-24 19:27:56.118296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.140 [2024-07-24 19:27:56.118307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.140 [2024-07-24 19:27:56.127098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:10.140 [2024-07-24 19:27:56.127119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.140 [2024-07-24 19:27:56.127130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.140 [2024-07-24 19:27:56.136138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:10.140 [2024-07-24 19:27:56.136159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.140 [2024-07-24 19:27:56.136170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.140 [2024-07-24 19:27:56.145738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:10.140 [2024-07-24 19:27:56.145760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.140 [2024-07-24 19:27:56.145770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.140 [2024-07-24 19:27:56.153331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:10.140 [2024-07-24 19:27:56.153353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.140 [2024-07-24 19:27:56.153364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.140 [2024-07-24 19:27:56.163907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:10.140 [2024-07-24 19:27:56.163929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:10.140 [2024-07-24 19:27:56.163940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.140 [2024-07-24 19:27:56.173282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:10.140 [2024-07-24 19:27:56.173304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.140 [2024-07-24 19:27:56.173315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.140 [2024-07-24 19:27:56.182326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:10.140 [2024-07-24 19:27:56.182349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.140 [2024-07-24 19:27:56.182359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.140 [2024-07-24 19:27:56.190864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:10.140 [2024-07-24 19:27:56.190886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.140 [2024-07-24 19:27:56.190896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.140 [2024-07-24 19:27:56.199561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:10.140 [2024-07-24 19:27:56.199583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.140 [2024-07-24 19:27:56.199593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.140 [2024-07-24 19:27:56.208239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:10.140 [2024-07-24 19:27:56.208261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.140 [2024-07-24 19:27:56.208272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.140 [2024-07-24 19:27:56.217836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:10.140 [2024-07-24 19:27:56.217857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.140 [2024-07-24 19:27:56.217868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.140 [2024-07-24 19:27:56.226286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:10.140 [2024-07-24 19:27:56.226308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 
lba:25152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.140 [2024-07-24 19:27:56.226322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.140 [2024-07-24 19:27:56.235083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:10.140 [2024-07-24 19:27:56.235106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.140 [2024-07-24 19:27:56.235116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.140 [2024-07-24 19:27:56.244040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:10.140 [2024-07-24 19:27:56.244062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.140 [2024-07-24 19:27:56.244073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.140 [2024-07-24 19:27:56.253768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:10.140 [2024-07-24 19:27:56.253791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.140 [2024-07-24 19:27:56.253802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.140 [2024-07-24 19:27:56.262863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:10.140 [2024-07-24 19:27:56.262886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.140 [2024-07-24 19:27:56.262897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.140 [2024-07-24 19:27:56.270883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:10.140 [2024-07-24 19:27:56.270906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.140 [2024-07-24 19:27:56.270917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.140 [2024-07-24 19:27:56.280639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:10.140 [2024-07-24 19:27:56.280662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.140 [2024-07-24 19:27:56.280673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.140 [2024-07-24 19:27:56.289452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:10.140 [2024-07-24 19:27:56.289476] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.140 [2024-07-24 19:27:56.289487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.140 [2024-07-24 19:27:56.297037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:10.140 [2024-07-24 19:27:56.297060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.140 [2024-07-24 19:27:56.297071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.140 [2024-07-24 19:27:56.307367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:10.140 [2024-07-24 19:27:56.307405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.140 [2024-07-24 19:27:56.307416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.141 [2024-07-24 19:27:56.314887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:10.141 [2024-07-24 19:27:56.314910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.141 [2024-07-24 19:27:56.314920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.141 [2024-07-24 19:27:56.325526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:10.141 [2024-07-24 19:27:56.325551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.141 [2024-07-24 19:27:56.325562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.141 [2024-07-24 19:27:56.335036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:10.141 [2024-07-24 19:27:56.335059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.141 [2024-07-24 19:27:56.335069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.141 [2024-07-24 19:27:56.343119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 00:27:10.141 [2024-07-24 19:27:56.343142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.141 [2024-07-24 19:27:56.343152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.141 [2024-07-24 19:27:56.353603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dec1c0) 
00:27:10.141 
00:27:10.141 Latency(us)
00:27:10.141 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:10.141 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:27:10.141 nvme0n1 : 2.00 28312.28 110.59 0.00 0.00 4516.07 2070.94 12845.06
00:27:10.141 ===================================================================================================================
00:27:10.141 Total : 28312.28 110.59 0.00 0.00 4516.07 2070.94 12845.06
00:27:10.141 0
00:27:10.400 19:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:10.400 19:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:10.400 19:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:10.400 | .driver_specific
00:27:10.400 | .nvme_error
00:27:10.400 | .status_code
00:27:10.400 | .command_transient_transport_error'
00:27:10.400 19:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:10.400 19:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 222 > 0 ))
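For reference, the get_transient_errcount helper traced above reduces to a small shell function. This is a minimal sketch reconstructed from the xtrace, assuming only that bperf_rpc wraps scripts/rpc.py against /var/tmp/bperf.sock as shown in its expansion:

    get_transient_errcount() {
        local bdev=$1
        # bdev_get_iostat carries per-NVMe-status error counters because the
        # controller was configured with --nvme-error-stat; pull the transient
        # transport error count for the first (only) bdev in the reply.
        bperf_rpc bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
    }

The (( 222 > 0 )) check above then asserts that the 2-second run accumulated at least one such error; here it counted 222.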
00:27:10.400 19:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1675120
00:27:10.401 19:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1675120 ']'
00:27:10.401 19:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1675120
00:27:10.401 19:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:27:10.401 19:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:27:10.401 19:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1675120
00:27:10.401 19:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:27:10.401 19:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:27:10.401 19:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1675120'
00:27:10.401 killing process with pid 1675120
00:27:10.401 19:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1675120
00:27:10.401 Received shutdown signal, test time was about 2.000000 seconds
00:27:10.401 
00:27:10.401 Latency(us)
00:27:10.401 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:10.401 ===================================================================================================================
00:27:10.401 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:10.401 19:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1675120
00:27:10.660 19:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:27:10.660 19:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:10.660 19:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:27:10.660 19:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:27:10.660 19:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:27:10.660 19:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1675896
00:27:10.660 19:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1675896 /var/tmp/bperf.sock
00:27:10.660 19:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:27:10.660 19:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1675896 ']'
00:27:10.660 19:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:10.660 19:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:27:10.660 19:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:10.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:10.660 19:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:27:10.660 19:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:10.919 [2024-07-24 19:27:56.862304] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization...
00:27:10.660 [2024-07-24 19:27:56.862358] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1675896 ]
00:27:10.660 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:10.660 Zero copy mechanism will not be used.
00:27:10.660 EAL: No free 2048 kB hugepages reported on node 1
00:27:10.919 [2024-07-24 19:27:56.932450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:10.919 [2024-07-24 19:27:57.007035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:27:11.488 19:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:27:11.488 19:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:27:11.488 19:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:11.488 19:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:11.747 19:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:11.747 19:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:11.747 19:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:11.747 19:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:11.747 19:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:11.747 19:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:12.007 nvme0n1
00:27:12.007 19:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:27:12.007 19:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:12.007 19:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:12.007 19:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:12.007 19:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:12.007 19:27:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
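Condensed, the setup just traced amounts to the sequence below. This is a sketch assembled from the xtrace above: bperf_rpc demonstrably talks to the bdevperf app over /var/tmp/bperf.sock (its expansion is visible in the trace), while rpc_cmd is assumed to address the NVMe-oF target's default RPC socket, which is where the crc32c corruption is injected.

    # Start bdevperf idle (-z) so it can be configured over RPC first:
    # core mask 0x2, 128 KiB random reads, queue depth 16, 2-second run.
    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &

    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # keep NVMe error counters, retry I/O indefinitely
    rpc_cmd accel_error_inject_error -o crc32c -t disable                     # connect with injection off
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                        # --ddgst enables the TCP data digest
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32               # corrupt crc32c results at interval 32
    bperf_py perform_tests                                                    # kick off the timed workload

With the data digest enabled, each corrupted crc32c surfaces at the initiator as a data digest error and completes the affected READ with COMMAND TRANSIENT TRANSPORT ERROR, which is the pattern that fills the log below; the sqhd stride of 0x20 between flagged completions is consistent with the injection interval of 32.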
00:27:12.007 [2024-07-24 19:27:58.188268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.007 [2024-07-24 19:27:58.188303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.007 [2024-07-24 19:27:58.188319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:12.007 [2024-07-24 19:27:58.199524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.007 [2024-07-24 19:27:58.199551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.007 [2024-07-24 19:27:58.199564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:12.007 [2024-07-24 19:27:58.208290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.007 [2024-07-24 19:27:58.208312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.007 [2024-07-24 19:27:58.208323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:12.007 [2024-07-24 19:27:58.217129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.007 [2024-07-24 19:27:58.217153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.007 [2024-07-24 19:27:58.217165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:12.007 [2024-07-24 19:27:58.225598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.007 [2024-07-24 19:27:58.225622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.007 [2024-07-24 19:27:58.225632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:12.007 [2024-07-24 19:27:58.233350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.007 [2024-07-24 19:27:58.233373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.007 [2024-07-24 19:27:58.233384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:12.007 [2024-07-24 19:27:58.241052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.007 [2024-07-24 19:27:58.241076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.007 [2024-07-24 19:27:58.241087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:12.268 [2024-07-24 19:27:58.248752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.268 [2024-07-24 19:27:58.248778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.268 [2024-07-24 19:27:58.248790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:12.268 [2024-07-24 19:27:58.256100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.268 [2024-07-24 19:27:58.256123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.268 [2024-07-24 19:27:58.256134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:12.268 [2024-07-24 19:27:58.263006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.268 [2024-07-24 19:27:58.263033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.268 [2024-07-24 19:27:58.263043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:12.268 [2024-07-24 19:27:58.269507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.268 [2024-07-24 19:27:58.269530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.268 [2024-07-24 19:27:58.269540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:12.268 [2024-07-24 19:27:58.275769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.268 [2024-07-24 19:27:58.275791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.268 [2024-07-24 19:27:58.275801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:12.268 [2024-07-24 19:27:58.282218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.268 [2024-07-24 19:27:58.282242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.268 [2024-07-24 19:27:58.282253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:12.268 [2024-07-24 19:27:58.288223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.268 [2024-07-24 19:27:58.288246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.268 [2024-07-24 19:27:58.288256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:12.268 [2024-07-24 19:27:58.294118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.268 [2024-07-24 19:27:58.294141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.268 [2024-07-24 19:27:58.294152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:12.268 [2024-07-24 19:27:58.300063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.268 [2024-07-24 19:27:58.300086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.268 [2024-07-24 19:27:58.300096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:12.268 [2024-07-24 19:27:58.305943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.268 [2024-07-24 19:27:58.305965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.268 [2024-07-24 19:27:58.305976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:12.268 [2024-07-24 19:27:58.311897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.268 [2024-07-24 19:27:58.311920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.268 [2024-07-24 19:27:58.311931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:12.268 [2024-07-24 19:27:58.317720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.268 [2024-07-24 19:27:58.317742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.268 [2024-07-24 19:27:58.317753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:12.268 [2024-07-24 19:27:58.323544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.268 [2024-07-24 19:27:58.323567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.268 [2024-07-24 19:27:58.323577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:12.268 [2024-07-24 19:27:58.329355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.268 [2024-07-24 19:27:58.329379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.268 [2024-07-24 19:27:58.329390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:12.268 [2024-07-24 19:27:58.335238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.268 [2024-07-24 19:27:58.335260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.268 [2024-07-24 19:27:58.335270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:12.268 [2024-07-24 19:27:58.340541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.268 [2024-07-24 19:27:58.340564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.268 [2024-07-24 19:27:58.340574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:12.268 [2024-07-24 19:27:58.346359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.268 [2024-07-24 19:27:58.346382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.268 [2024-07-24 19:27:58.346393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:12.268 [2024-07-24 19:27:58.352204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.268 [2024-07-24 19:27:58.352227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.268 [2024-07-24 19:27:58.352237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:12.268 [2024-07-24 19:27:58.358043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.268 [2024-07-24 19:27:58.358066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.268 [2024-07-24 19:27:58.358077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:12.268 [2024-07-24 19:27:58.363808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.268 [2024-07-24 19:27:58.363831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.268 [2024-07-24 19:27:58.363845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:12.268 [2024-07-24 19:27:58.369686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.268 [2024-07-24 19:27:58.369710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.269 [2024-07-24 19:27:58.369726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:12.269 [2024-07-24 19:27:58.375567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.269 [2024-07-24 19:27:58.375589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.269 [2024-07-24 19:27:58.375599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:12.269 [2024-07-24 19:27:58.381456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.269 [2024-07-24 19:27:58.381479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.269 [2024-07-24 19:27:58.381490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:12.269 [2024-07-24 19:27:58.387317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.269 [2024-07-24 19:27:58.387340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.269 [2024-07-24 19:27:58.387350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:12.269 [2024-07-24 19:27:58.393384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.269 [2024-07-24 19:27:58.393408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.269 [2024-07-24 19:27:58.393418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:12.269 [2024-07-24 19:27:58.399343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.269 [2024-07-24 19:27:58.399367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.269 [2024-07-24 19:27:58.399378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:12.269 [2024-07-24 19:27:58.405156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.269 [2024-07-24 19:27:58.405180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.269 [2024-07-24 19:27:58.405191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:12.269 [2024-07-24 19:27:58.411206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.269 [2024-07-24 19:27:58.411230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.269 [2024-07-24 19:27:58.411240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:12.269 [2024-07-24 19:27:58.417092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.269 [2024-07-24 19:27:58.417119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.269 [2024-07-24 19:27:58.417130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:12.269 [2024-07-24 19:27:58.423000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.269 [2024-07-24 19:27:58.423024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.269 [2024-07-24 19:27:58.423034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:12.269 [2024-07-24 19:27:58.428958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.269 [2024-07-24 19:27:58.428980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.269 [2024-07-24 19:27:58.428991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:12.269 [2024-07-24 19:27:58.434910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.269 [2024-07-24 19:27:58.434934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.269 [2024-07-24 19:27:58.434944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:12.269 [2024-07-24 19:27:58.440957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.269 [2024-07-24 19:27:58.440979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.269 [2024-07-24 19:27:58.440990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:12.269 [2024-07-24 19:27:58.446998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.269 [2024-07-24 19:27:58.447021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.269 [2024-07-24 19:27:58.447032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:12.269 [2024-07-24 19:27:58.452925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.269 [2024-07-24 19:27:58.452949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.269 [2024-07-24 19:27:58.452960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:12.269 [2024-07-24 19:27:58.458765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.269 [2024-07-24 19:27:58.458788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.269 [2024-07-24 19:27:58.458799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:12.269 [2024-07-24 19:27:58.465168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.269 [2024-07-24 19:27:58.465192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.269 [2024-07-24 19:27:58.465205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:12.269 [2024-07-24 19:27:58.477954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.269 [2024-07-24 19:27:58.477978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.269 [2024-07-24 19:27:58.477989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:12.269 [2024-07-24 19:27:58.488240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.269 [2024-07-24 19:27:58.488263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.269 [2024-07-24 19:27:58.488274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:12.269 [2024-07-24 19:27:58.497567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.269 [2024-07-24 19:27:58.497590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.269 [2024-07-24 19:27:58.497600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:12.530 [2024-07-24 19:27:58.507506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.530 [2024-07-24 19:27:58.507531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.530 [2024-07-24 19:27:58.507542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:12.530 [2024-07-24 19:27:58.517341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.530 [2024-07-24 19:27:58.517364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.530 [2024-07-24 19:27:58.517375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:12.530 [2024-07-24 19:27:58.526947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.530 [2024-07-24 19:27:58.526972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.530 [2024-07-24 19:27:58.526983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:12.530 [2024-07-24 19:27:58.536630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.530 [2024-07-24 19:27:58.536654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.530 [2024-07-24 19:27:58.536665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:12.531 [2024-07-24 19:27:58.545845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.531 [2024-07-24 19:27:58.545868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.531 [2024-07-24 19:27:58.545879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:12.531 [2024-07-24 19:27:58.559728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.531 [2024-07-24 19:27:58.559755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.531 [2024-07-24 19:27:58.559766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:12.531 [2024-07-24 19:27:58.570194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.531 [2024-07-24 19:27:58.570217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.531 [2024-07-24 19:27:58.570227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:12.531 [2024-07-24 19:27:58.579800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.531 [2024-07-24 19:27:58.579823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.531 [2024-07-24 19:27:58.579833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:12.531 [2024-07-24 19:27:58.589123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.531 [2024-07-24 19:27:58.589146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.531 [2024-07-24 19:27:58.589157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:12.531 [2024-07-24 19:27:58.600123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.531 [2024-07-24 19:27:58.600148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.531 [2024-07-24 19:27:58.600158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:12.531 [2024-07-24 19:27:58.612807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.531 [2024-07-24 19:27:58.612831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.531 [2024-07-24 19:27:58.612842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:12.531 [2024-07-24 19:27:58.623529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.531 [2024-07-24 19:27:58.623553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.531 [2024-07-24 19:27:58.623564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:12.531 [2024-07-24 19:27:58.633625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.531 [2024-07-24 19:27:58.633649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.531 [2024-07-24 19:27:58.633660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:12.531 [2024-07-24 19:27:58.643276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.531 [2024-07-24 19:27:58.643299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.531 [2024-07-24 19:27:58.643310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:12.531 [2024-07-24 19:27:58.653578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.531 [2024-07-24 19:27:58.653602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.531 [2024-07-24 19:27:58.653613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:12.531 [2024-07-24 19:27:58.661940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.531 [2024-07-24 19:27:58.661964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.531 [2024-07-24 19:27:58.661975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:12.531 [2024-07-24 19:27:58.670661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.531 [2024-07-24 19:27:58.670685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.531 [2024-07-24 19:27:58.670695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:12.531 [2024-07-24 19:27:58.678022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.531 [2024-07-24 19:27:58.678046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.531 [2024-07-24 19:27:58.678056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:12.531 [2024-07-24 19:27:58.684627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.531 [2024-07-24 19:27:58.684651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.531 [2024-07-24 19:27:58.684661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:12.531 [2024-07-24 19:27:58.691196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.531 [2024-07-24 19:27:58.691219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.531 [2024-07-24 19:27:58.691229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:12.531 [2024-07-24 19:27:58.698127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.531 [2024-07-24 19:27:58.698149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.531 [2024-07-24 19:27:58.698160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:12.531 [2024-07-24 19:27:58.705053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.531 [2024-07-24 19:27:58.705077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.531 [2024-07-24 19:27:58.705087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:12.531 [2024-07-24 19:27:58.717505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.531 [2024-07-24 19:27:58.717529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.531 [2024-07-24 19:27:58.717543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:12.531 [2024-07-24 19:27:58.730538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.531 [2024-07-24 19:27:58.730562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.531 [2024-07-24 19:27:58.730572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:12.531 [2024-07-24 19:27:58.740442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.531 [2024-07-24 19:27:58.740465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.531 [2024-07-24 19:27:58.740476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:12.531 [2024-07-24 19:27:58.749486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.531 [2024-07-24 19:27:58.749509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.531 [2024-07-24 19:27:58.749521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:12.531 [2024-07-24 19:27:58.757784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.531 [2024-07-24 19:27:58.757807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.531 [2024-07-24 19:27:58.757818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:12.531 [2024-07-24 19:27:58.765392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.531 [2024-07-24 19:27:58.765415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.531 [2024-07-24 19:27:58.765426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:12.791 [2024-07-24 19:27:58.772302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.791 [2024-07-24 19:27:58.772326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.791 [2024-07-24 19:27:58.772336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:12.792 [2024-07-24 19:27:58.779193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.792 [2024-07-24 19:27:58.779217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.792 [2024-07-24 19:27:58.779227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:12.792 [2024-07-24 19:27:58.791414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.792 [2024-07-24 19:27:58.791437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.792 [2024-07-24 19:27:58.791456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:12.792 [2024-07-24 19:27:58.801526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.792 [2024-07-24 19:27:58.801552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.792 [2024-07-24 19:27:58.801562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:12.792 [2024-07-24 19:27:58.810923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.792 [2024-07-24 19:27:58.810946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.792 [2024-07-24 19:27:58.810957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:12.792 [2024-07-24 19:27:58.818475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.792 [2024-07-24 19:27:58.818499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.792 [2024-07-24 19:27:58.818509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:12.792 [2024-07-24 19:27:58.825564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.792 [2024-07-24 19:27:58.825587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.792 [2024-07-24 19:27:58.825597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:12.792 [2024-07-24 19:27:58.833472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.792 [2024-07-24 19:27:58.833495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.792 [2024-07-24 19:27:58.833505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:12.792 [2024-07-24 19:27:58.844424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.792 [2024-07-24 19:27:58.844447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.792 [2024-07-24 19:27:58.844457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:12.792 [2024-07-24 19:27:58.855503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.792 [2024-07-24 19:27:58.855526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.792 [2024-07-24 19:27:58.855537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:12.792 [2024-07-24 19:27:58.865483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.792 [2024-07-24 19:27:58.865506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.792 [2024-07-24 19:27:58.865516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:12.792 [2024-07-24 19:27:58.876360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.792 [2024-07-24 19:27:58.876383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.792 [2024-07-24 19:27:58.876393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:12.792 [2024-07-24 19:27:58.887060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.792 [2024-07-24 19:27:58.887083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.792 [2024-07-24 19:27:58.887094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:12.792 [2024-07-24 19:27:58.897464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.792 [2024-07-24 19:27:58.897488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.792 [2024-07-24 19:27:58.897499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:12.792 [2024-07-24 19:27:58.907739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.792 [2024-07-24 19:27:58.907763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.792 [2024-07-24 19:27:58.907773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:12.792 [2024-07-24 19:27:58.917825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.792 [2024-07-24 19:27:58.917849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.792 [2024-07-24 19:27:58.917859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:12.792 [2024-07-24 19:27:58.926985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.792 [2024-07-24 19:27:58.927008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.792 [2024-07-24 19:27:58.927019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:12.792 [2024-07-24 19:27:58.936283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.792 [2024-07-24 19:27:58.936305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.792 [2024-07-24 19:27:58.936316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:12.792 [2024-07-24 19:27:58.948253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.792 [2024-07-24 19:27:58.948276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.792 [2024-07-24 19:27:58.948287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:12.792 [2024-07-24 19:27:58.960836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.792 [2024-07-24 19:27:58.960859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.792 [2024-07-24 19:27:58.960870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:12.792 [2024-07-24 19:27:58.973878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.792 [2024-07-24 19:27:58.973902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.792 [2024-07-24 19:27:58.973916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:12.792 [2024-07-24 19:27:58.984321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.792 [2024-07-24 19:27:58.984345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.792 [2024-07-24 19:27:58.984356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:12.792 [2024-07-24 19:27:58.993963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.792 [2024-07-24 19:27:58.993987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.792 [2024-07-24 19:27:58.993998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:12.792 [2024-07-24 19:27:59.004537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.792 [2024-07-24 19:27:59.004562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.792 [2024-07-24 19:27:59.004573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:12.792 [2024-07-24 19:27:59.013086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.792 [2024-07-24 19:27:59.013110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.792 [2024-07-24 19:27:59.013121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:12.792 [2024-07-24 19:27:59.021853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:12.793 [2024-07-24 19:27:59.021888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.793 [2024-07-24 19:27:59.021899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:13.053 [2024-07-24 19:27:59.031064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.053 [2024-07-24 19:27:59.031090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.053 [2024-07-24 19:27:59.031101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:13.053 [2024-07-24 19:27:59.039420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.053 [2024-07-24 19:27:59.039445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.053 [2024-07-24 19:27:59.039456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:13.053 [2024-07-24 19:27:59.049050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.053 [2024-07-24 19:27:59.049075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.053 [2024-07-24 19:27:59.049086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:13.053 [2024-07-24 19:27:59.057814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.053 [2024-07-24 19:27:59.057838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.053 [2024-07-24 19:27:59.057849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:13.053 [2024-07-24 19:27:59.065844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.053 [2024-07-24 19:27:59.065868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.053 [2024-07-24 19:27:59.065879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:13.053 [2024-07-24 19:27:59.073864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.053 [2024-07-24 19:27:59.073888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.053 [2024-07-24 19:27:59.073899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:13.053 [2024-07-24 19:27:59.081355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.053 [2024-07-24 19:27:59.081379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.053 [2024-07-24 19:27:59.081390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:13.053 [2024-07-24 19:27:59.088870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.053 [2024-07-24 19:27:59.088893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.053 [2024-07-24 19:27:59.088904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:13.053 [2024-07-24 19:27:59.096608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.053 [2024-07-24 19:27:59.096632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.053 [2024-07-24 19:27:59.096642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:13.053 [2024-07-24 19:27:59.104168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.053 [2024-07-24 19:27:59.104191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.053 [2024-07-24 19:27:59.104202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:13.053 [2024-07-24 19:27:59.110911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.053 [2024-07-24 19:27:59.110933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.053 [2024-07-24 19:27:59.110944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:13.053 [2024-07-24 19:27:59.116999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.053 [2024-07-24 19:27:59.117022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.053 [2024-07-24 19:27:59.117035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:13.053 [2024-07-24 19:27:59.123259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.053 [2024-07-24 19:27:59.123282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.053 [2024-07-24 19:27:59.123293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:13.054 [2024-07-24 19:27:59.129263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.054 [2024-07-24 19:27:59.129285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.054 [2024-07-24 19:27:59.129295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:13.054 [2024-07-24 19:27:59.135256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.054 [2024-07-24 19:27:59.135279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.054 [2024-07-24 19:27:59.135290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:13.054 [2024-07-24 19:27:59.141138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.054 [2024-07-24 19:27:59.141161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.054 [2024-07-24 19:27:59.141172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:13.054 [2024-07-24 19:27:59.146923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.054 [2024-07-24 19:27:59.146945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.054 [2024-07-24 19:27:59.146956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:13.054 [2024-07-24 19:27:59.152750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.054 [2024-07-24 19:27:59.152772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.054 [2024-07-24 19:27:59.152782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:13.054 [2024-07-24 19:27:59.158600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.054 [2024-07-24 19:27:59.158622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.054 [2024-07-24 19:27:59.158632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:13.054 [2024-07-24 19:27:59.164403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.054 [2024-07-24 19:27:59.164426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.054 [2024-07-24 19:27:59.164436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:13.054 [2024-07-24 19:27:59.170278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.054 [2024-07-24 19:27:59.170305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.054 [2024-07-24 19:27:59.170315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:13.054 [2024-07-24 19:27:59.176065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.054 [2024-07-24 19:27:59.176088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.054 [2024-07-24 19:27:59.176098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:13.054 [2024-07-24 19:27:59.183555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.054 [2024-07-24 19:27:59.183579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.054 [2024-07-24 19:27:59.183589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:13.054 [2024-07-24 19:27:59.192657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.054 [2024-07-24 19:27:59.192682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.054 [2024-07-24 19:27:59.192692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:13.054 [2024-07-24 19:27:59.201443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.054 [2024-07-24 19:27:59.201467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.054 [2024-07-24 19:27:59.201478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:13.054 [2024-07-24 19:27:59.210280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.054 [2024-07-24 19:27:59.210304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.054 [2024-07-24 19:27:59.210316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:13.054 [2024-07-24 19:27:59.219154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.054 [2024-07-24 19:27:59.219177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.054 [2024-07-24 19:27:59.219187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:13.054 [2024-07-24 19:27:59.228137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.054 [2024-07-24 19:27:59.228161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.054 [2024-07-24 19:27:59.228172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:13.054 [2024-07-24 19:27:59.237887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.054 [2024-07-24 19:27:59.237911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.054 [2024-07-24 19:27:59.237922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:13.054 [2024-07-24 19:27:59.246010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.054 [2024-07-24 19:27:59.246033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.054 [2024-07-24 19:27:59.246044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:13.054 [2024-07-24 19:27:59.255206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.054 [2024-07-24 19:27:59.255230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.054 [2024-07-24 19:27:59.255240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:13.054 [2024-07-24 19:27:59.264410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.054 [2024-07-24 19:27:59.264434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.054 [2024-07-24 19:27:59.264445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:13.054 [2024-07-24 19:27:59.273258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.054 [2024-07-24 19:27:59.273282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.054 [2024-07-24 19:27:59.273293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:13.054 [2024-07-24 19:27:59.282475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.054 [2024-07-24 19:27:59.282499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.054 [2024-07-24 19:27:59.282510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:13.054 [2024-07-24 19:27:59.290950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.054 [2024-07-24 19:27:59.290974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.054 [2024-07-24 19:27:59.290985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:13.315 [2024-07-24 19:27:59.298247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.315 [2024-07-24 19:27:59.298271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.315 [2024-07-24 19:27:59.298282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:13.315 [2024-07-24 19:27:59.305189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.315 [2024-07-24 19:27:59.305212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.315 [2024-07-24 19:27:59.305223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:13.315 [2024-07-24 19:27:59.311667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.315 [2024-07-24 19:27:59.311690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.315 [2024-07-24 19:27:59.311704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:13.315 [2024-07-24 19:27:59.317981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.315 [2024-07-24 19:27:59.318003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.315 [2024-07-24 19:27:59.318014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:13.315 [2024-07-24 19:27:59.324024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.315 [2024-07-24 19:27:59.324045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.315 [2024-07-24 19:27:59.324056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:13.315 [2024-07-24 19:27:59.330001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.315 [2024-07-24 19:27:59.330023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.315 [2024-07-24 19:27:59.330033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:13.315 [2024-07-24 19:27:59.335764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.315 [2024-07-24 19:27:59.335787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.315 [2024-07-24 19:27:59.335797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:13.315 [2024-07-24 19:27:59.341579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.315 [2024-07-24 19:27:59.341602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.315 [2024-07-24 19:27:59.341613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:13.315 [2024-07-24 19:27:59.347471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:13.315 [2024-07-24 19:27:59.347494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.315 [2024-07-24 19:27:59.347504] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.315 [2024-07-24 19:27:59.353278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.315 [2024-07-24 19:27:59.353301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.315 [2024-07-24 19:27:59.353311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.315 [2024-07-24 19:27:59.359116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.315 [2024-07-24 19:27:59.359138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.315 [2024-07-24 19:27:59.359148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.315 [2024-07-24 19:27:59.364902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.315 [2024-07-24 19:27:59.364928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.315 [2024-07-24 19:27:59.364938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.315 [2024-07-24 19:27:59.370742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.315 [2024-07-24 19:27:59.370764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.315 [2024-07-24 19:27:59.370774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.315 [2024-07-24 19:27:59.376543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.315 [2024-07-24 19:27:59.376566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.315 [2024-07-24 19:27:59.376576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.315 [2024-07-24 19:27:59.382374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.315 [2024-07-24 19:27:59.382397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.315 [2024-07-24 19:27:59.382407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.315 [2024-07-24 19:27:59.388134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.315 [2024-07-24 19:27:59.388157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.315 [2024-07-24 19:27:59.388167] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.315 [2024-07-24 19:27:59.393964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.315 [2024-07-24 19:27:59.393987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.315 [2024-07-24 19:27:59.393997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.315 [2024-07-24 19:27:59.399782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.315 [2024-07-24 19:27:59.399804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.315 [2024-07-24 19:27:59.399814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.315 [2024-07-24 19:27:59.405537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.315 [2024-07-24 19:27:59.405559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.315 [2024-07-24 19:27:59.405570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.316 [2024-07-24 19:27:59.411377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.316 [2024-07-24 19:27:59.411400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.316 [2024-07-24 19:27:59.411410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.316 [2024-07-24 19:27:59.417264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.316 [2024-07-24 19:27:59.417287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.316 [2024-07-24 19:27:59.417297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.316 [2024-07-24 19:27:59.423178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.316 [2024-07-24 19:27:59.423203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.316 [2024-07-24 19:27:59.423214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.316 [2024-07-24 19:27:59.429049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.316 [2024-07-24 19:27:59.429073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:13.316 [2024-07-24 19:27:59.429083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.316 [2024-07-24 19:27:59.434993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.316 [2024-07-24 19:27:59.435016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.316 [2024-07-24 19:27:59.435027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.316 [2024-07-24 19:27:59.440853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.316 [2024-07-24 19:27:59.440875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.316 [2024-07-24 19:27:59.440886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.316 [2024-07-24 19:27:59.446698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.316 [2024-07-24 19:27:59.446726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.316 [2024-07-24 19:27:59.446736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.316 [2024-07-24 19:27:59.452496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.316 [2024-07-24 19:27:59.452519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.316 [2024-07-24 19:27:59.452529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.316 [2024-07-24 19:27:59.458369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.316 [2024-07-24 19:27:59.458392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.316 [2024-07-24 19:27:59.458403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.316 [2024-07-24 19:27:59.464191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.316 [2024-07-24 19:27:59.464214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.316 [2024-07-24 19:27:59.464228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.316 [2024-07-24 19:27:59.470015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.316 [2024-07-24 19:27:59.470039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3552 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.316 [2024-07-24 19:27:59.470050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.316 [2024-07-24 19:27:59.475924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.316 [2024-07-24 19:27:59.475949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.316 [2024-07-24 19:27:59.475960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.316 [2024-07-24 19:27:59.481775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.316 [2024-07-24 19:27:59.481798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.316 [2024-07-24 19:27:59.481819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.316 [2024-07-24 19:27:59.487613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.316 [2024-07-24 19:27:59.487637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.316 [2024-07-24 19:27:59.487647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.316 [2024-07-24 19:27:59.493486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.316 [2024-07-24 19:27:59.493509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.316 [2024-07-24 19:27:59.493520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.316 [2024-07-24 19:27:59.499955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.316 [2024-07-24 19:27:59.499980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.316 [2024-07-24 19:27:59.499991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.316 [2024-07-24 19:27:59.508296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.316 [2024-07-24 19:27:59.508321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.316 [2024-07-24 19:27:59.508333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.316 [2024-07-24 19:27:59.516916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.316 [2024-07-24 19:27:59.516941] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.316 [2024-07-24 19:27:59.516952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.316 [2024-07-24 19:27:59.525684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.316 [2024-07-24 19:27:59.525709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.316 [2024-07-24 19:27:59.525727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.316 [2024-07-24 19:27:59.533139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.316 [2024-07-24 19:27:59.533164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.316 [2024-07-24 19:27:59.533175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.316 [2024-07-24 19:27:59.541411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.316 [2024-07-24 19:27:59.541435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.316 [2024-07-24 19:27:59.541446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.316 [2024-07-24 19:27:59.550059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.316 [2024-07-24 19:27:59.550084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.316 [2024-07-24 19:27:59.550094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.577 [2024-07-24 19:27:59.558517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.577 [2024-07-24 19:27:59.558542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.577 [2024-07-24 19:27:59.558553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.577 [2024-07-24 19:27:59.567072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.577 [2024-07-24 19:27:59.567096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.577 [2024-07-24 19:27:59.567108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.577 [2024-07-24 19:27:59.575656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.577 [2024-07-24 19:27:59.575681] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.577 [2024-07-24 19:27:59.575692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.577 [2024-07-24 19:27:59.584482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.577 [2024-07-24 19:27:59.584506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.577 [2024-07-24 19:27:59.584517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.577 [2024-07-24 19:27:59.593147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.577 [2024-07-24 19:27:59.593171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.577 [2024-07-24 19:27:59.593185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.577 [2024-07-24 19:27:59.602896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.577 [2024-07-24 19:27:59.602920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.577 [2024-07-24 19:27:59.602931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.577 [2024-07-24 19:27:59.611909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.577 [2024-07-24 19:27:59.611934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.577 [2024-07-24 19:27:59.611945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.577 [2024-07-24 19:27:59.621192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.577 [2024-07-24 19:27:59.621216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.577 [2024-07-24 19:27:59.621228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.577 [2024-07-24 19:27:59.630918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.577 [2024-07-24 19:27:59.630943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.577 [2024-07-24 19:27:59.630954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.577 [2024-07-24 19:27:59.640845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 
00:27:13.577 [2024-07-24 19:27:59.640869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.577 [2024-07-24 19:27:59.640880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.577 [2024-07-24 19:27:59.650732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.577 [2024-07-24 19:27:59.650756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.577 [2024-07-24 19:27:59.650767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.577 [2024-07-24 19:27:59.660300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.577 [2024-07-24 19:27:59.660324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.577 [2024-07-24 19:27:59.660334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.577 [2024-07-24 19:27:59.669737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.577 [2024-07-24 19:27:59.669760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.577 [2024-07-24 19:27:59.669771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.577 [2024-07-24 19:27:59.678862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.577 [2024-07-24 19:27:59.678899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.577 [2024-07-24 19:27:59.678910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.577 [2024-07-24 19:27:59.687227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.577 [2024-07-24 19:27:59.687251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.578 [2024-07-24 19:27:59.687261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.578 [2024-07-24 19:27:59.694966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.578 [2024-07-24 19:27:59.694989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.578 [2024-07-24 19:27:59.695000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.578 [2024-07-24 19:27:59.702038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.578 [2024-07-24 19:27:59.702061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.578 [2024-07-24 19:27:59.702072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.578 [2024-07-24 19:27:59.708450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.578 [2024-07-24 19:27:59.708474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.578 [2024-07-24 19:27:59.708484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.578 [2024-07-24 19:27:59.714777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.578 [2024-07-24 19:27:59.714799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.578 [2024-07-24 19:27:59.714810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.578 [2024-07-24 19:27:59.720890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.578 [2024-07-24 19:27:59.720913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.578 [2024-07-24 19:27:59.720923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.578 [2024-07-24 19:27:59.726883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.578 [2024-07-24 19:27:59.726906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.578 [2024-07-24 19:27:59.726916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.578 [2024-07-24 19:27:59.733133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.578 [2024-07-24 19:27:59.733157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.578 [2024-07-24 19:27:59.733167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.578 [2024-07-24 19:27:59.739017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.578 [2024-07-24 19:27:59.739040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.578 [2024-07-24 19:27:59.739050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.578 [2024-07-24 19:27:59.744873] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.578 [2024-07-24 19:27:59.744896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.578 [2024-07-24 19:27:59.744907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.578 [2024-07-24 19:27:59.750691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.578 [2024-07-24 19:27:59.750721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.578 [2024-07-24 19:27:59.750731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.578 [2024-07-24 19:27:59.756532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.578 [2024-07-24 19:27:59.756555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.578 [2024-07-24 19:27:59.756565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.578 [2024-07-24 19:27:59.762379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.578 [2024-07-24 19:27:59.762400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.578 [2024-07-24 19:27:59.762411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.578 [2024-07-24 19:27:59.768208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.578 [2024-07-24 19:27:59.768231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.578 [2024-07-24 19:27:59.768241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.578 [2024-07-24 19:27:59.774045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.578 [2024-07-24 19:27:59.774068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.578 [2024-07-24 19:27:59.774078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.578 [2024-07-24 19:27:59.779864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.578 [2024-07-24 19:27:59.779888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.578 [2024-07-24 19:27:59.779898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:27:13.578 [2024-07-24 19:27:59.785676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.578 [2024-07-24 19:27:59.785699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.578 [2024-07-24 19:27:59.785713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.578 [2024-07-24 19:27:59.791578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.578 [2024-07-24 19:27:59.791602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.578 [2024-07-24 19:27:59.791612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.578 [2024-07-24 19:27:59.797438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.578 [2024-07-24 19:27:59.797461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.578 [2024-07-24 19:27:59.797472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.578 [2024-07-24 19:27:59.803233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.578 [2024-07-24 19:27:59.803257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.578 [2024-07-24 19:27:59.803267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.578 [2024-07-24 19:27:59.809035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.578 [2024-07-24 19:27:59.809058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.578 [2024-07-24 19:27:59.809068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.578 [2024-07-24 19:27:59.814966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.578 [2024-07-24 19:27:59.814989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.578 [2024-07-24 19:27:59.815000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.839 [2024-07-24 19:27:59.820856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.839 [2024-07-24 19:27:59.820880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.839 [2024-07-24 19:27:59.820891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.839 [2024-07-24 19:27:59.826693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.839 [2024-07-24 19:27:59.826722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.839 [2024-07-24 19:27:59.826733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.839 [2024-07-24 19:27:59.832576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.839 [2024-07-24 19:27:59.832600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.839 [2024-07-24 19:27:59.832610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.839 [2024-07-24 19:27:59.838482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.839 [2024-07-24 19:27:59.838512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.839 [2024-07-24 19:27:59.838522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.839 [2024-07-24 19:27:59.844376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.839 [2024-07-24 19:27:59.844399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.839 [2024-07-24 19:27:59.844409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.839 [2024-07-24 19:27:59.850241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.839 [2024-07-24 19:27:59.850264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.839 [2024-07-24 19:27:59.850274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.839 [2024-07-24 19:27:59.856136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.839 [2024-07-24 19:27:59.856159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.839 [2024-07-24 19:27:59.856170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.839 [2024-07-24 19:27:59.862191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.839 [2024-07-24 19:27:59.862214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.839 [2024-07-24 19:27:59.862225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.839 [2024-07-24 19:27:59.868218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.839 [2024-07-24 19:27:59.868241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.839 [2024-07-24 19:27:59.868253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.839 [2024-07-24 19:27:59.874190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.839 [2024-07-24 19:27:59.874214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.839 [2024-07-24 19:27:59.874224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.839 [2024-07-24 19:27:59.880115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.839 [2024-07-24 19:27:59.880137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.839 [2024-07-24 19:27:59.880148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.839 [2024-07-24 19:27:59.886156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.839 [2024-07-24 19:27:59.886180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.839 [2024-07-24 19:27:59.886191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.839 [2024-07-24 19:27:59.892068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.839 [2024-07-24 19:27:59.892092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.839 [2024-07-24 19:27:59.892102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.839 [2024-07-24 19:27:59.898057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.839 [2024-07-24 19:27:59.898080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.839 [2024-07-24 19:27:59.898091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.839 [2024-07-24 19:27:59.903959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.839 [2024-07-24 19:27:59.903982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.839 [2024-07-24 19:27:59.903993] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.839 [2024-07-24 19:27:59.909880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.839 [2024-07-24 19:27:59.909903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.839 [2024-07-24 19:27:59.909914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.839 [2024-07-24 19:27:59.915804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.839 [2024-07-24 19:27:59.915826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.839 [2024-07-24 19:27:59.915837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.839 [2024-07-24 19:27:59.921726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.839 [2024-07-24 19:27:59.921748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.839 [2024-07-24 19:27:59.921759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.839 [2024-07-24 19:27:59.927606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.839 [2024-07-24 19:27:59.927629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.839 [2024-07-24 19:27:59.927640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.839 [2024-07-24 19:27:59.933492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.839 [2024-07-24 19:27:59.933515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.839 [2024-07-24 19:27:59.933526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.839 [2024-07-24 19:27:59.939367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.839 [2024-07-24 19:27:59.939389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.839 [2024-07-24 19:27:59.939403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.839 [2024-07-24 19:27:59.945262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.839 [2024-07-24 19:27:59.945285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.840 
[2024-07-24 19:27:59.945295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.840 [2024-07-24 19:27:59.951144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.840 [2024-07-24 19:27:59.951168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.840 [2024-07-24 19:27:59.951178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.840 [2024-07-24 19:27:59.957076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.840 [2024-07-24 19:27:59.957098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.840 [2024-07-24 19:27:59.957109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.840 [2024-07-24 19:27:59.962978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.840 [2024-07-24 19:27:59.963001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.840 [2024-07-24 19:27:59.963012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.840 [2024-07-24 19:27:59.968840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.840 [2024-07-24 19:27:59.968863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.840 [2024-07-24 19:27:59.968874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.840 [2024-07-24 19:27:59.974711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.840 [2024-07-24 19:27:59.974740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.840 [2024-07-24 19:27:59.974750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.840 [2024-07-24 19:27:59.980576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.840 [2024-07-24 19:27:59.980600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.840 [2024-07-24 19:27:59.980610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.840 [2024-07-24 19:27:59.986458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.840 [2024-07-24 19:27:59.986482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7296 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.840 [2024-07-24 19:27:59.986492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.840 [2024-07-24 19:27:59.992360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.840 [2024-07-24 19:27:59.992383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.840 [2024-07-24 19:27:59.992394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.840 [2024-07-24 19:27:59.998216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.840 [2024-07-24 19:27:59.998238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.840 [2024-07-24 19:27:59.998249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.840 [2024-07-24 19:28:00.004546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.840 [2024-07-24 19:28:00.004571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.840 [2024-07-24 19:28:00.004582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.840 [2024-07-24 19:28:00.010608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.840 [2024-07-24 19:28:00.010632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.840 [2024-07-24 19:28:00.010642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.840 [2024-07-24 19:28:00.016521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.840 [2024-07-24 19:28:00.016545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.840 [2024-07-24 19:28:00.016555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.840 [2024-07-24 19:28:00.022418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.840 [2024-07-24 19:28:00.022442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.840 [2024-07-24 19:28:00.022453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.840 [2024-07-24 19:28:00.032223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.840 [2024-07-24 19:28:00.032250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.840 [2024-07-24 19:28:00.032261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.840 [2024-07-24 19:28:00.038583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.840 [2024-07-24 19:28:00.038609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.840 [2024-07-24 19:28:00.038620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.840 [2024-07-24 19:28:00.045237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.840 [2024-07-24 19:28:00.045261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.840 [2024-07-24 19:28:00.045277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.840 [2024-07-24 19:28:00.051487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.840 [2024-07-24 19:28:00.051511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.840 [2024-07-24 19:28:00.051522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.840 [2024-07-24 19:28:00.057700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.840 [2024-07-24 19:28:00.057731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.840 [2024-07-24 19:28:00.057742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.840 [2024-07-24 19:28:00.063839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.840 [2024-07-24 19:28:00.063864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.840 [2024-07-24 19:28:00.063875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.840 [2024-07-24 19:28:00.069906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.840 [2024-07-24 19:28:00.069930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.840 [2024-07-24 19:28:00.069941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.840 [2024-07-24 19:28:00.075304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:13.840 [2024-07-24 19:28:00.075328] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.840 [2024-07-24 19:28:00.075339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.100 [2024-07-24 19:28:00.081240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:14.100 [2024-07-24 19:28:00.081264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.100 [2024-07-24 19:28:00.081276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.100 [2024-07-24 19:28:00.087255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:14.100 [2024-07-24 19:28:00.087279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.100 [2024-07-24 19:28:00.087290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.100 [2024-07-24 19:28:00.095169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:14.100 [2024-07-24 19:28:00.095195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.100 [2024-07-24 19:28:00.095207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.100 [2024-07-24 19:28:00.101307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:14.100 [2024-07-24 19:28:00.101335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.100 [2024-07-24 19:28:00.101346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.100 [2024-07-24 19:28:00.107507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:14.100 [2024-07-24 19:28:00.107531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.100 [2024-07-24 19:28:00.107542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.100 [2024-07-24 19:28:00.113534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:14.100 [2024-07-24 19:28:00.113557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.100 [2024-07-24 19:28:00.113568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.100 [2024-07-24 19:28:00.119570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 
00:27:14.100 [2024-07-24 19:28:00.119593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.100 [2024-07-24 19:28:00.119604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.100 [2024-07-24 19:28:00.123448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:14.100 [2024-07-24 19:28:00.123470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.100 [2024-07-24 19:28:00.123481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.100 [2024-07-24 19:28:00.128343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:14.100 [2024-07-24 19:28:00.128367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.100 [2024-07-24 19:28:00.128378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.100 [2024-07-24 19:28:00.134368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:14.100 [2024-07-24 19:28:00.134391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.100 [2024-07-24 19:28:00.134402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.100 [2024-07-24 19:28:00.140376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:14.100 [2024-07-24 19:28:00.140399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.100 [2024-07-24 19:28:00.140410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.100 [2024-07-24 19:28:00.146375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:14.100 [2024-07-24 19:28:00.146398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.100 [2024-07-24 19:28:00.146409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.100 [2024-07-24 19:28:00.152339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0) 00:27:14.100 [2024-07-24 19:28:00.152363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.100 [2024-07-24 19:28:00.152373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.100 [2024-07-24 19:28:00.158372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:14.101 [2024-07-24 19:28:00.158395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.101 [2024-07-24 19:28:00.158405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:14.101 [2024-07-24 19:28:00.164372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:14.101 [2024-07-24 19:28:00.164396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.101 [2024-07-24 19:28:00.164406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:14.101 [2024-07-24 19:28:00.170393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:14.101 [2024-07-24 19:28:00.170417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.101 [2024-07-24 19:28:00.170427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:14.101 [2024-07-24 19:28:00.176307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d51bf0)
00:27:14.101 [2024-07-24 19:28:00.176330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.101 [2024-07-24 19:28:00.176341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:14.101
00:27:14.101 Latency(us)
00:27:14.101 Device Information : runtime(s)    IOPS   MiB/s  Fail/s  TO/s  Average      min       max
00:27:14.101 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:27:14.101 nvme0n1            :       2.00 4237.91  529.74    0.00  0.00  3772.62   789.71  14050.92
00:27:14.101 ===================================================================================================================
00:27:14.101 Total              :            4237.91  529.74    0.00  0.00  3772.62   789.71  14050.92
00:27:14.101 0
00:27:14.101 19:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:14.101 19:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:14.101 | .driver_specific
00:27:14.101 | .nvme_error
00:27:14.101 | .status_code
00:27:14.101 | .command_transient_transport_error'
00:27:14.101 19:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:14.101 19:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:14.360 19:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 273 > 0 ))
00:27:14.360 19:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1675896
00:27:14.360 19:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1675896 ']'
00:27:14.360 19:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1675896
00:27:14.360 19:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:27:14.360 19:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:27:14.360 19:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1675896
00:27:14.360 19:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:27:14.360 19:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:27:14.360 19:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1675896'
killing process with pid 1675896
00:27:14.360 19:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1675896
Received shutdown signal, test time was about 2.000000 seconds
00:27:14.360
00:27:14.360 Latency(us)
00:27:14.360 Device Information : runtime(s)    IOPS   MiB/s  Fail/s  TO/s  Average      min       max
00:27:14.360 ===================================================================================================================
00:27:14.360 Total              :               0.00    0.00    0.00  0.00     0.00     0.00      0.00
00:27:14.360 19:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1675896
00:27:14.620 19:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:27:14.620 19:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:14.620 19:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:27:14.620 19:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:27:14.620 19:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:27:14.620 19:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1676460
00:27:14.620 19:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1676460 /var/tmp/bperf.sock
00:27:14.620 19:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:27:14.620 19:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1676460 ']'
00:27:14.620 19:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:14.620 19:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:27:14.620 19:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
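[Editor's note, not part of the CI log] The randread pass summarized above is internally consistent: 4237.91 IOPS at 131072 bytes (128 KiB) per I/O works out to 529.74 MiB/s, and at queue depth 16 an average latency of 3772.62 us predicts roughly 16 / 0.0037726 s, about 4241 IOPS, in line with the measured figure. The trace then launches a second bdevperf instance for the randwrite / 4096-byte / qd 128 pass. Below is a minimal sketch of the launch-and-wait pattern visible in this xtrace; the binary path, socket path, and flags are taken verbatim from the log, while the function name, its body, and the rpc_get_methods probe are illustrative reconstructions, not SPDK's actual digest.sh.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bperf.sock

  run_bperf_err_sketch() {   # hypothetical stand-in for run_bperf_err
      local rw=$1 bs=$2 qd=$3
      # -m 2 pins the reactor to core 1; -z parks bdevperf until perform_tests
      # arrives over the RPC socket; -t 2 bounds each run to 2 seconds.
      "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w "$rw" -o "$bs" -t 2 -q "$qd" -z &
      local bperfpid=$!
      # waitforlisten-style poll: retry until the UNIX-domain RPC socket answers
      # (max_retries=100 in the trace above).
      for ((i = 0; i < 100; i++)); do
          "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods &> /dev/null && return 0
          sleep 0.1
      done
      return 1
  }

  run_bperf_err_sketch randwrite 4096 128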
00:27:14.620 19:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:27:14.620 19:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:14.620 [2024-07-24 19:28:00.678906] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization...
00:27:14.620 [2024-07-24 19:28:00.678959] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1676460 ]
00:27:14.620 EAL: No free 2048 kB hugepages reported on node 1
00:27:14.620 [2024-07-24 19:28:00.748193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:14.620 [2024-07-24 19:28:00.811370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:27:15.557 19:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:27:15.557 19:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:27:15.557 19:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:15.557 19:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:15.557 19:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:15.557 19:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:15.557 19:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:15.557 19:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:15.557 19:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:15.557 19:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:15.816 nvme0n1
00:27:15.817 19:28:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:27:15.817 19:28:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:15.817 19:28:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:15.817 19:28:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:15.817 19:28:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:15.817 19:28:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:16.075 Running I/O for 2 seconds...
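[Editor's note, not part of the CI log] Before "Running I/O for 2 seconds..." the trace performs the digest-error setup whose effects fill the rest of this section: NVMe error statistics and unlimited bdev retries are enabled, crc32c corruption is switched off while the controller attaches so the attach itself is not corrupted, the controller is attached with --ddgst so every NVMe/TCP data PDU carries a CRC32C data digest, and only then are the next 256 crc32c operations corrupted. Each corrupted digest is detected by the receiver and completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22); because of --bdev-retry-count -1, bdevperf retries and the workload still finishes. The same sequence, collected into one hedged sketch, with every command line taken verbatim from this log and only the SPDK/RPC variables added:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"

  # Count NVMe status codes per controller and retry failed I/O forever.
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Make sure no crc32c corruption is pending while the controller attaches.
  $RPC accel_error_inject_error -o crc32c -t disable
  # --ddgst enables the CRC32C data digest on this NVMe/TCP controller.
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Corrupt the next 256 crc32c operations, then kick off the 2-second run.
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 256
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests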
00:27:16.075 [2024-07-24 19:28:02.128555] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190fc998 00:27:16.075 [2024-07-24 19:28:02.129465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.075 [2024-07-24 19:28:02.129499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:16.075 [2024-07-24 19:28:02.137272] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190f7970 00:27:16.075 [2024-07-24 19:28:02.138166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.075 [2024-07-24 19:28:02.138191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:16.075 [2024-07-24 19:28:02.146011] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190fc998 00:27:16.075 [2024-07-24 19:28:02.146894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.075 [2024-07-24 19:28:02.146915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:16.075 [2024-07-24 19:28:02.154826] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190f7970 00:27:16.075 [2024-07-24 19:28:02.155711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.075 [2024-07-24 19:28:02.155734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:16.075 [2024-07-24 19:28:02.163560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190fc998 00:27:16.075 [2024-07-24 19:28:02.164365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.075 [2024-07-24 19:28:02.164386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:16.075 [2024-07-24 19:28:02.172332] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190f7970 00:27:16.075 [2024-07-24 19:28:02.173221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.075 [2024-07-24 19:28:02.173242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:16.075 [2024-07-24 19:28:02.181088] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190fc998 00:27:16.075 [2024-07-24 19:28:02.181973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:18302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.075 [2024-07-24 19:28:02.181994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 
sqhd:0045 p:0 m:0 dnr:0 00:27:16.075 [2024-07-24 19:28:02.189747] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190f7970 00:27:16.075 [2024-07-24 19:28:02.190626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.075 [2024-07-24 19:28:02.190647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:16.075 [2024-07-24 19:28:02.197668] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:16.075 [2024-07-24 19:28:02.198374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.075 [2024-07-24 19:28:02.198394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:16.075 [2024-07-24 19:28:02.206766] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190f6cc8 00:27:16.075 [2024-07-24 19:28:02.207644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.075 [2024-07-24 19:28:02.207664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:16.075 [2024-07-24 19:28:02.216317] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:16.075 [2024-07-24 19:28:02.217319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.075 [2024-07-24 19:28:02.217340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.075 [2024-07-24 19:28:02.225048] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:16.075 [2024-07-24 19:28:02.225961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.075 [2024-07-24 19:28:02.225981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.075 [2024-07-24 19:28:02.233683] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:16.075 [2024-07-24 19:28:02.234684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.075 [2024-07-24 19:28:02.234707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.075 [2024-07-24 19:28:02.242382] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:16.075 [2024-07-24 19:28:02.243363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.075 [2024-07-24 19:28:02.243384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:72 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.075 [2024-07-24 19:28:02.251084] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:16.075 [2024-07-24 19:28:02.251992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.075 [2024-07-24 19:28:02.252012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.075 [2024-07-24 19:28:02.259731] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:16.075 [2024-07-24 19:28:02.260727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.075 [2024-07-24 19:28:02.260747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.075 [2024-07-24 19:28:02.268440] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:16.075 [2024-07-24 19:28:02.269437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:18054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.075 [2024-07-24 19:28:02.269458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.075 [2024-07-24 19:28:02.277115] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:16.075 [2024-07-24 19:28:02.278023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.075 [2024-07-24 19:28:02.278044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.075 [2024-07-24 19:28:02.285771] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:16.075 [2024-07-24 19:28:02.286775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.075 [2024-07-24 19:28:02.286795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.075 [2024-07-24 19:28:02.294497] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:16.075 [2024-07-24 19:28:02.295476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.075 [2024-07-24 19:28:02.295497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.075 [2024-07-24 19:28:02.303141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:16.075 [2024-07-24 19:28:02.304098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.075 [2024-07-24 19:28:02.304118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.075 [2024-07-24 19:28:02.311826] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:16.075 [2024-07-24 19:28:02.312830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.075 [2024-07-24 19:28:02.312851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.372 [2024-07-24 19:28:02.320599] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:16.372 [2024-07-24 19:28:02.321600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.372 [2024-07-24 19:28:02.321621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.372 [2024-07-24 19:28:02.329326] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:16.372 [2024-07-24 19:28:02.330326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:3934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.372 [2024-07-24 19:28:02.330347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.372 [2024-07-24 19:28:02.338020] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:16.372 [2024-07-24 19:28:02.339017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.372 [2024-07-24 19:28:02.339038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.372 [2024-07-24 19:28:02.346678] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:16.372 [2024-07-24 19:28:02.347677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.372 [2024-07-24 19:28:02.347698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.372 [2024-07-24 19:28:02.355310] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:16.372 [2024-07-24 19:28:02.356268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.372 [2024-07-24 19:28:02.356288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.372 [2024-07-24 19:28:02.364027] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:16.372 [2024-07-24 19:28:02.365026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.372 [2024-07-24 19:28:02.365047] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.372 [2024-07-24 19:28:02.372667] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:16.372 [2024-07-24 19:28:02.373647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.372 [2024-07-24 19:28:02.373667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.372 [2024-07-24 19:28:02.381331] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:16.372 [2024-07-24 19:28:02.382374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.372 [2024-07-24 19:28:02.382395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.372 [2024-07-24 19:28:02.390301] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:16.372 [2024-07-24 19:28:02.391334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.372 [2024-07-24 19:28:02.391356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.372 [2024-07-24 19:28:02.398978] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:16.372 [2024-07-24 19:28:02.399983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:11972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.372 [2024-07-24 19:28:02.400004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.372 [2024-07-24 19:28:02.407670] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:16.372 [2024-07-24 19:28:02.408598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.372 [2024-07-24 19:28:02.408619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.372 [2024-07-24 19:28:02.416356] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:16.372 [2024-07-24 19:28:02.417267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.372 [2024-07-24 19:28:02.417288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.372 [2024-07-24 19:28:02.424999] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:16.372 [2024-07-24 19:28:02.426007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.372 [2024-07-24 19:28:02.426027] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.372 [2024-07-24 19:28:02.433637] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:16.372 [2024-07-24 19:28:02.434638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.372 [2024-07-24 19:28:02.434659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.372 [2024-07-24 19:28:02.442304] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:16.372 [2024-07-24 19:28:02.443304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.372 [2024-07-24 19:28:02.443325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.372 [2024-07-24 19:28:02.450944] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:16.372 [2024-07-24 19:28:02.451941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.372 [2024-07-24 19:28:02.451962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.373 [2024-07-24 19:28:02.459645] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:16.373 [2024-07-24 19:28:02.460641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.373 [2024-07-24 19:28:02.460665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.373 [2024-07-24 19:28:02.468260] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:16.373 [2024-07-24 19:28:02.469257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.373 [2024-07-24 19:28:02.469278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.373 [2024-07-24 19:28:02.476924] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:16.373 [2024-07-24 19:28:02.477831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.373 [2024-07-24 19:28:02.477852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.373 [2024-07-24 19:28:02.485615] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:16.373 [2024-07-24 19:28:02.486565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.373 [2024-07-24 
19:28:02.486586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.373 [2024-07-24 19:28:02.494258] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:16.373 [2024-07-24 19:28:02.495269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.373 [2024-07-24 19:28:02.495291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.373 [2024-07-24 19:28:02.503200] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:16.373 [2024-07-24 19:28:02.504114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.373 [2024-07-24 19:28:02.504136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.373 [2024-07-24 19:28:02.511879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:16.373 [2024-07-24 19:28:02.512785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.373 [2024-07-24 19:28:02.512806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.373 [2024-07-24 19:28:02.520520] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:16.373 [2024-07-24 19:28:02.521432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.373 [2024-07-24 19:28:02.521452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.373 [2024-07-24 19:28:02.529248] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:16.373 [2024-07-24 19:28:02.530246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.373 [2024-07-24 19:28:02.530267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.373 [2024-07-24 19:28:02.537885] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:16.373 [2024-07-24 19:28:02.538794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.373 [2024-07-24 19:28:02.538815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.373 [2024-07-24 19:28:02.546561] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:16.373 [2024-07-24 19:28:02.547559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:16.373 [2024-07-24 19:28:02.547579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.373 [2024-07-24 19:28:02.555248] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:16.373 [2024-07-24 19:28:02.556246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.373 [2024-07-24 19:28:02.556266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.373 [2024-07-24 19:28:02.563875] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:16.373 [2024-07-24 19:28:02.564820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.373 [2024-07-24 19:28:02.564841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.373 [2024-07-24 19:28:02.572625] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:16.373 [2024-07-24 19:28:02.573635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.373 [2024-07-24 19:28:02.573656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.373 [2024-07-24 19:28:02.581562] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:16.373 [2024-07-24 19:28:02.582586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.373 [2024-07-24 19:28:02.582607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.373 [2024-07-24 19:28:02.590432] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:16.373 [2024-07-24 19:28:02.591432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.373 [2024-07-24 19:28:02.591452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.373 [2024-07-24 19:28:02.599140] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:16.373 [2024-07-24 19:28:02.600053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.373 [2024-07-24 19:28:02.600073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.373 [2024-07-24 19:28:02.607800] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:16.373 [2024-07-24 19:28:02.608771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9910 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:16.373 [2024-07-24 19:28:02.608791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.633 [2024-07-24 19:28:02.616508] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:16.633 [2024-07-24 19:28:02.617499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.633 [2024-07-24 19:28:02.617520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.633 [2024-07-24 19:28:02.625416] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:16.633 [2024-07-24 19:28:02.626424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.633 [2024-07-24 19:28:02.626444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.633 [2024-07-24 19:28:02.634297] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:16.633 [2024-07-24 19:28:02.635307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.633 [2024-07-24 19:28:02.635328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.633 [2024-07-24 19:28:02.643207] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:16.633 [2024-07-24 19:28:02.644227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.633 [2024-07-24 19:28:02.644249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.633 [2024-07-24 19:28:02.652136] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:16.633 [2024-07-24 19:28:02.653163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.633 [2024-07-24 19:28:02.653184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.633 [2024-07-24 19:28:02.660944] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:16.633 [2024-07-24 19:28:02.661871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.633 [2024-07-24 19:28:02.661891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.633 [2024-07-24 19:28:02.669659] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:16.633 [2024-07-24 19:28:02.670558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20879 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.633 [2024-07-24 19:28:02.670578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.633 [2024-07-24 19:28:02.678366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:16.633 [2024-07-24 19:28:02.679263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.633 [2024-07-24 19:28:02.679284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.633 [2024-07-24 19:28:02.687044] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:16.633 [2024-07-24 19:28:02.688044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.633 [2024-07-24 19:28:02.688070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.634 [2024-07-24 19:28:02.695786] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:16.634 [2024-07-24 19:28:02.696760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.634 [2024-07-24 19:28:02.696781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.634 [2024-07-24 19:28:02.704595] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:16.634 [2024-07-24 19:28:02.705598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.634 [2024-07-24 19:28:02.705620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.634 [2024-07-24 19:28:02.713525] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:16.634 [2024-07-24 19:28:02.714507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.634 [2024-07-24 19:28:02.714528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.634 [2024-07-24 19:28:02.722246] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:16.634 [2024-07-24 19:28:02.723159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.634 [2024-07-24 19:28:02.723179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.634 [2024-07-24 19:28:02.731002] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:16.634 [2024-07-24 19:28:02.731983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:92 nsid:1 lba:19179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.634 [2024-07-24 19:28:02.732003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.634 [2024-07-24 19:28:02.739692] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:16.634 [2024-07-24 19:28:02.740697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.634 [2024-07-24 19:28:02.740722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.634 [2024-07-24 19:28:02.748387] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:16.634 [2024-07-24 19:28:02.749367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.634 [2024-07-24 19:28:02.749387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.634 [2024-07-24 19:28:02.757041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:16.634 [2024-07-24 19:28:02.757952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.634 [2024-07-24 19:28:02.757972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.634 [2024-07-24 19:28:02.765771] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:16.634 [2024-07-24 19:28:02.766775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.634 [2024-07-24 19:28:02.766797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.634 [2024-07-24 19:28:02.774436] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:16.634 [2024-07-24 19:28:02.775440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.634 [2024-07-24 19:28:02.775459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.634 [2024-07-24 19:28:02.783320] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:16.634 [2024-07-24 19:28:02.784272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.634 [2024-07-24 19:28:02.784293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.634 [2024-07-24 19:28:02.792267] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:16.634 [2024-07-24 19:28:02.793291] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.634 [2024-07-24 19:28:02.793312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.634 [2024-07-24 19:28:02.801183] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:16.634 [2024-07-24 19:28:02.802188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.634 [2024-07-24 19:28:02.802209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.634 [2024-07-24 19:28:02.810114] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:16.634 [2024-07-24 19:28:02.811069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.634 [2024-07-24 19:28:02.811090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.634 [2024-07-24 19:28:02.819071] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:16.634 [2024-07-24 19:28:02.820089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.634 [2024-07-24 19:28:02.820109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.634 [2024-07-24 19:28:02.827834] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:16.634 [2024-07-24 19:28:02.828750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.634 [2024-07-24 19:28:02.828771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.634 [2024-07-24 19:28:02.836531] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:16.634 [2024-07-24 19:28:02.837450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:15556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.634 [2024-07-24 19:28:02.837471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.634 [2024-07-24 19:28:02.845197] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:16.634 [2024-07-24 19:28:02.846127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.634 [2024-07-24 19:28:02.846147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.634 [2024-07-24 19:28:02.853866] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:16.634 [2024-07-24 
19:28:02.854784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.634 [2024-07-24 19:28:02.854805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.634 [2024-07-24 19:28:02.862574] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:16.634 [2024-07-24 19:28:02.863488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.634 [2024-07-24 19:28:02.863509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.634 [2024-07-24 19:28:02.871238] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:16.894 [2024-07-24 19:28:02.872228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.894 [2024-07-24 19:28:02.872250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.895 [2024-07-24 19:28:02.880229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:16.895 [2024-07-24 19:28:02.881235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.895 [2024-07-24 19:28:02.881255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.895 [2024-07-24 19:28:02.888949] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:16.895 [2024-07-24 19:28:02.890007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.895 [2024-07-24 19:28:02.890028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.895 [2024-07-24 19:28:02.897824] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:16.895 [2024-07-24 19:28:02.898805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.895 [2024-07-24 19:28:02.898825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.895 [2024-07-24 19:28:02.906691] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:16.895 [2024-07-24 19:28:02.907673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.895 [2024-07-24 19:28:02.907694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.895 [2024-07-24 19:28:02.915392] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 
00:27:16.895 [2024-07-24 19:28:02.916417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.895 [2024-07-24 19:28:02.916442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.895 [2024-07-24 19:28:02.924025] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:16.895 [2024-07-24 19:28:02.925027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.895 [2024-07-24 19:28:02.925047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.895 [2024-07-24 19:28:02.932746] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:16.895 [2024-07-24 19:28:02.933737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.895 [2024-07-24 19:28:02.933758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.895 [2024-07-24 19:28:02.941399] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:16.895 [2024-07-24 19:28:02.942401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.895 [2024-07-24 19:28:02.942422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.895 [2024-07-24 19:28:02.950060] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:16.895 [2024-07-24 19:28:02.951060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.895 [2024-07-24 19:28:02.951080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.895 [2024-07-24 19:28:02.958784] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:16.895 [2024-07-24 19:28:02.959693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.895 [2024-07-24 19:28:02.959713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.895 [2024-07-24 19:28:02.967433] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:16.895 [2024-07-24 19:28:02.968343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.895 [2024-07-24 19:28:02.968363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.895 [2024-07-24 19:28:02.976113] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) 
with pdu=0x2000190feb58 00:27:16.895 [2024-07-24 19:28:02.977111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.895 [2024-07-24 19:28:02.977132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.895 [2024-07-24 19:28:02.984830] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:16.895 [2024-07-24 19:28:02.985824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.895 [2024-07-24 19:28:02.985845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.895 [2024-07-24 19:28:02.993482] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:16.895 [2024-07-24 19:28:02.994487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.895 [2024-07-24 19:28:02.994511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.895 [2024-07-24 19:28:03.002172] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:16.895 [2024-07-24 19:28:03.003172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:18533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.895 [2024-07-24 19:28:03.003193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.895 [2024-07-24 19:28:03.010880] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:16.895 [2024-07-24 19:28:03.011870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.895 [2024-07-24 19:28:03.011891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.895 [2024-07-24 19:28:03.019588] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:16.895 [2024-07-24 19:28:03.020593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.895 [2024-07-24 19:28:03.020614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.895 [2024-07-24 19:28:03.028327] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:16.895 [2024-07-24 19:28:03.029258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.895 [2024-07-24 19:28:03.029278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.895 [2024-07-24 19:28:03.037014] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:16.895 [2024-07-24 19:28:03.038015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.895 [2024-07-24 19:28:03.038036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.895 [2024-07-24 19:28:03.045692] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:16.895 [2024-07-24 19:28:03.046677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.895 [2024-07-24 19:28:03.046698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.895 [2024-07-24 19:28:03.054406] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:16.895 [2024-07-24 19:28:03.055387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:8467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.895 [2024-07-24 19:28:03.055408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.895 [2024-07-24 19:28:03.063041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:16.895 [2024-07-24 19:28:03.064051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.895 [2024-07-24 19:28:03.064071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.895 [2024-07-24 19:28:03.071728] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:16.895 [2024-07-24 19:28:03.072725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.895 [2024-07-24 19:28:03.072745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.895 [2024-07-24 19:28:03.080424] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:16.895 [2024-07-24 19:28:03.081465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.895 [2024-07-24 19:28:03.081485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.895 [2024-07-24 19:28:03.089066] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:16.895 [2024-07-24 19:28:03.089992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.896 [2024-07-24 19:28:03.090012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.896 [2024-07-24 19:28:03.097774] tcp.c:2113:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:16.896 [2024-07-24 19:28:03.098767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.896 [2024-07-24 19:28:03.098788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.896 [2024-07-24 19:28:03.106422] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:16.896 [2024-07-24 19:28:03.107338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:17107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.896 [2024-07-24 19:28:03.107358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.896 [2024-07-24 19:28:03.115088] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:16.896 [2024-07-24 19:28:03.116067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.896 [2024-07-24 19:28:03.116087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.896 [2024-07-24 19:28:03.123811] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:16.896 [2024-07-24 19:28:03.124787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.896 [2024-07-24 19:28:03.124808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:16.896 [2024-07-24 19:28:03.132508] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:17.156 [2024-07-24 19:28:03.133423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.156 [2024-07-24 19:28:03.133444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.156 [2024-07-24 19:28:03.141259] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:17.156 [2024-07-24 19:28:03.142316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.156 [2024-07-24 19:28:03.142337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.156 [2024-07-24 19:28:03.150138] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:17.156 [2024-07-24 19:28:03.151067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:10767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.156 [2024-07-24 19:28:03.151088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.156 [2024-07-24 19:28:03.158791] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:17.156 [2024-07-24 19:28:03.159762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.156 [2024-07-24 19:28:03.159783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.156 [2024-07-24 19:28:03.167498] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:17.156 [2024-07-24 19:28:03.168408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.156 [2024-07-24 19:28:03.168429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.156 [2024-07-24 19:28:03.176196] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:17.156 [2024-07-24 19:28:03.177194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.156 [2024-07-24 19:28:03.177215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.156 [2024-07-24 19:28:03.184846] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:17.156 [2024-07-24 19:28:03.185837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:18038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.156 [2024-07-24 19:28:03.185858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.156 [2024-07-24 19:28:03.193565] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:17.156 [2024-07-24 19:28:03.194569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.156 [2024-07-24 19:28:03.194590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.156 [2024-07-24 19:28:03.202288] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:17.156 [2024-07-24 19:28:03.203285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.156 [2024-07-24 19:28:03.203306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.156 [2024-07-24 19:28:03.210935] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:17.156 [2024-07-24 19:28:03.211866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.156 [2024-07-24 19:28:03.211887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.156 
[2024-07-24 19:28:03.219655] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:17.156 [2024-07-24 19:28:03.220651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.156 [2024-07-24 19:28:03.220675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.156 [2024-07-24 19:28:03.228297] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:17.156 [2024-07-24 19:28:03.229305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.156 [2024-07-24 19:28:03.229326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.156 [2024-07-24 19:28:03.236972] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:17.156 [2024-07-24 19:28:03.237947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.156 [2024-07-24 19:28:03.237967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.156 [2024-07-24 19:28:03.245648] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:17.156 [2024-07-24 19:28:03.246563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.156 [2024-07-24 19:28:03.246584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.156 [2024-07-24 19:28:03.254300] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:17.156 [2024-07-24 19:28:03.255300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.156 [2024-07-24 19:28:03.255320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.156 [2024-07-24 19:28:03.263001] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:17.156 [2024-07-24 19:28:03.264018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.157 [2024-07-24 19:28:03.264038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.157 [2024-07-24 19:28:03.271679] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:17.157 [2024-07-24 19:28:03.272615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.157 [2024-07-24 19:28:03.272635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005d p:0 m:0 
dnr:0 00:27:17.157 [2024-07-24 19:28:03.280330] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:17.157 [2024-07-24 19:28:03.281311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.157 [2024-07-24 19:28:03.281332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.157 [2024-07-24 19:28:03.289042] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:17.157 [2024-07-24 19:28:03.290023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.157 [2024-07-24 19:28:03.290044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.157 [2024-07-24 19:28:03.297674] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:17.157 [2024-07-24 19:28:03.298657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.157 [2024-07-24 19:28:03.298678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.157 [2024-07-24 19:28:03.306343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:17.157 [2024-07-24 19:28:03.307323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.157 [2024-07-24 19:28:03.307344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.157 [2024-07-24 19:28:03.315042] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:17.157 [2024-07-24 19:28:03.316040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.157 [2024-07-24 19:28:03.316060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.157 [2024-07-24 19:28:03.323679] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:17.157 [2024-07-24 19:28:03.324662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.157 [2024-07-24 19:28:03.324682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.157 [2024-07-24 19:28:03.332366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:17.157 [2024-07-24 19:28:03.333279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.157 [2024-07-24 19:28:03.333299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 
cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.157 [2024-07-24 19:28:03.341018] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:17.157 [2024-07-24 19:28:03.341993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.157 [2024-07-24 19:28:03.342014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.157 [2024-07-24 19:28:03.349662] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:17.157 [2024-07-24 19:28:03.350686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.157 [2024-07-24 19:28:03.350706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.157 [2024-07-24 19:28:03.358394] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:17.157 [2024-07-24 19:28:03.359369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.157 [2024-07-24 19:28:03.359389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.157 [2024-07-24 19:28:03.367036] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:17.157 [2024-07-24 19:28:03.368035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.157 [2024-07-24 19:28:03.368055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.157 [2024-07-24 19:28:03.375677] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:17.157 [2024-07-24 19:28:03.376653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.157 [2024-07-24 19:28:03.376673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.157 [2024-07-24 19:28:03.384372] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:17.157 [2024-07-24 19:28:03.385349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.157 [2024-07-24 19:28:03.385369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.157 [2024-07-24 19:28:03.392988] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:17.157 [2024-07-24 19:28:03.393948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.157 [2024-07-24 19:28:03.393969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.416 [2024-07-24 19:28:03.401899] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:17.416 [2024-07-24 19:28:03.402872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.416 [2024-07-24 19:28:03.402892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.416 [2024-07-24 19:28:03.410572] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:17.416 [2024-07-24 19:28:03.411557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.416 [2024-07-24 19:28:03.411578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.416 [2024-07-24 19:28:03.419182] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:17.416 [2024-07-24 19:28:03.420184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.416 [2024-07-24 19:28:03.420204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.416 [2024-07-24 19:28:03.427905] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:17.416 [2024-07-24 19:28:03.428881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.416 [2024-07-24 19:28:03.428901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.416 [2024-07-24 19:28:03.436626] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:17.416 [2024-07-24 19:28:03.437607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.416 [2024-07-24 19:28:03.437627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.416 [2024-07-24 19:28:03.445272] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:17.416 [2024-07-24 19:28:03.446268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.416 [2024-07-24 19:28:03.446291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.416 [2024-07-24 19:28:03.453990] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:17.416 [2024-07-24 19:28:03.454918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.416 [2024-07-24 19:28:03.454938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.416 [2024-07-24 19:28:03.462616] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:17.416 [2024-07-24 19:28:03.463615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.416 [2024-07-24 19:28:03.463635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.416 [2024-07-24 19:28:03.471255] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:17.416 [2024-07-24 19:28:03.472232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.416 [2024-07-24 19:28:03.472252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.416 [2024-07-24 19:28:03.479959] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:17.416 [2024-07-24 19:28:03.480934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.416 [2024-07-24 19:28:03.480954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.416 [2024-07-24 19:28:03.488593] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:17.416 [2024-07-24 19:28:03.489577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.416 [2024-07-24 19:28:03.489598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.416 [2024-07-24 19:28:03.497428] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:17.416 [2024-07-24 19:28:03.498478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.416 [2024-07-24 19:28:03.498498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.416 [2024-07-24 19:28:03.506162] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:17.416 [2024-07-24 19:28:03.507164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.416 [2024-07-24 19:28:03.507187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.416 [2024-07-24 19:28:03.514799] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:17.416 [2024-07-24 19:28:03.515713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.416 [2024-07-24 19:28:03.515737] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.416 [2024-07-24 19:28:03.523524] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:17.416 [2024-07-24 19:28:03.524491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.416 [2024-07-24 19:28:03.524511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.416 [2024-07-24 19:28:03.532184] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:17.416 [2024-07-24 19:28:03.533119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.416 [2024-07-24 19:28:03.533139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.416 [2024-07-24 19:28:03.540853] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:17.416 [2024-07-24 19:28:03.541855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:9849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.416 [2024-07-24 19:28:03.541875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.416 [2024-07-24 19:28:03.549573] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:17.416 [2024-07-24 19:28:03.550556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.416 [2024-07-24 19:28:03.550577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.416 [2024-07-24 19:28:03.558199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:17.416 [2024-07-24 19:28:03.559202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.416 [2024-07-24 19:28:03.559222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.416 [2024-07-24 19:28:03.566866] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:17.416 [2024-07-24 19:28:03.567836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.416 [2024-07-24 19:28:03.567856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.416 [2024-07-24 19:28:03.575542] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:17.416 [2024-07-24 19:28:03.576471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.416 [2024-07-24 
19:28:03.576491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.416 [2024-07-24 19:28:03.584174] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:17.416 [2024-07-24 19:28:03.585149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.416 [2024-07-24 19:28:03.585169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.416 [2024-07-24 19:28:03.592872] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:17.416 [2024-07-24 19:28:03.593865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.416 [2024-07-24 19:28:03.593885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.416 [2024-07-24 19:28:03.601541] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:17.416 [2024-07-24 19:28:03.602517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.416 [2024-07-24 19:28:03.602537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.416 [2024-07-24 19:28:03.610182] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:17.416 [2024-07-24 19:28:03.611185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.416 [2024-07-24 19:28:03.611206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.416 [2024-07-24 19:28:03.618933] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:17.416 [2024-07-24 19:28:03.619957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.416 [2024-07-24 19:28:03.619977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.416 [2024-07-24 19:28:03.627607] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:17.416 [2024-07-24 19:28:03.628613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.416 [2024-07-24 19:28:03.628634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.416 [2024-07-24 19:28:03.636274] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:17.416 [2024-07-24 19:28:03.637204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14313 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:17.416 [2024-07-24 19:28:03.637224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.416 [2024-07-24 19:28:03.644977] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:17.416 [2024-07-24 19:28:03.646013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.416 [2024-07-24 19:28:03.646033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.416 [2024-07-24 19:28:03.653888] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:17.676 [2024-07-24 19:28:03.654905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.676 [2024-07-24 19:28:03.654926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.676 [2024-07-24 19:28:03.662728] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:17.676 [2024-07-24 19:28:03.663730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.676 [2024-07-24 19:28:03.663751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.676 [2024-07-24 19:28:03.671396] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:17.676 [2024-07-24 19:28:03.672399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.676 [2024-07-24 19:28:03.672422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.676 [2024-07-24 19:28:03.680035] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58 00:27:17.676 [2024-07-24 19:28:03.681037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:24867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.676 [2024-07-24 19:28:03.681057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.676 [2024-07-24 19:28:03.688751] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e1b48 00:27:17.676 [2024-07-24 19:28:03.689744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.676 [2024-07-24 19:28:03.689765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.676 [2024-07-24 19:28:03.697426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220 00:27:17.676 [2024-07-24 19:28:03.698354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16729 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.676 [2024-07-24 19:28:03.698375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:27:17.676 [2024-07-24 19:28:03.706075] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190feb58
00:27:17.676 [2024-07-24 19:28:03.707075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.676 [2024-07-24 19:28:03.707096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005d p:0 m:0 dnr:0
[... dozens of identical digest-error triplets omitted: from 19:28:03.714 through 19:28:04.106 the same three-line pattern repeats every ~8 ms on tqpair=(0x22ff810), the pdu cycling through 0x2000190feb58, 0x2000190e1b48 and 0x2000190e5220 while cid and lba vary per 4096-byte WRITE ...]
00:27:17.937 [2024-07-24 19:28:04.113989] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22ff810) with pdu=0x2000190e5220
00:27:17.937 [2024-07-24 19:28:04.114985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.937 [2024-07-24 19:28:04.115005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0
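Each three-line group above is one injected CRC failure observed end to end: tcp.c flags the data-digest mismatch on the receive path, nvme_qpair.c prints the WRITE that owned the corrupted PDU, and the command completes with the transient transport error status (00/22) that the harness counts next. As a hedged aside (not part of the test suite, and assuming this console output were captured to a hypothetical file bperf.log), the injections in such a capture could be tallied with:

    # count digest-error records in a captured log (bperf.log is a placeholder name)
    grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' bperf.log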
00:27:17.937                                                 Latency(us)
00:27:17.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:17.937 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:27:17.937 nvme0n1 : 2.00 29209.88 114.10 0.00 0.00 4376.85 1690.83 15938.36
00:27:17.937 ===================================================================================================================
00:27:17.937 Total : 29209.88 114.10 0.00 0.00 4376.85 1690.83 15938.36
00:27:17.937 0
00:27:17.937 19:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:17.937 19:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:17.937 19:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:17.937 19:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:27:18.196 19:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 229 > 0 ))
00:27:18.197 19:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1676460
00:27:18.197 19:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1676460 ']'
00:27:18.197 19:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1676460
00:27:18.197 19:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:27:18.197 19:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:27:18.197 19:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1676460
00:27:18.197 19:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:27:18.197 19:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:27:18.197 19:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1676460'
killing process with pid 1676460
00:27:18.197 19:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1676460
Received shutdown signal, test time was about 2.000000 seconds
00:27:18.197
00:27:18.197                                                 Latency(us)
00:27:18.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:18.197 ===================================================================================================================
00:27:18.197 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:18.197 19:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1676460
00:27:18.455 19:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:27:18.455 19:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:18.455 19:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
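The pass/fail gate above, (( 229 > 0 )), comes from the jq pipeline a few lines earlier: with --nvme-error-stat enabled, bdev_get_iostat exposes per-status-code NVMe error counters, and the filter pulls out the transient-transport-error bucket for the first bdev. A minimal sketch of running the same extraction by hand against the bperf socket (paths, socket, and bdev name exactly as traced above):

    # read the injected-error counter that get_transient_errcount checks
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'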
00:27:18.455 19:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:27:18.455 19:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:27:18.455 19:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1677124
00:27:18.455 19:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:27:18.455 19:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1677124 /var/tmp/bperf.sock
00:27:18.455 19:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1677124 ']'
00:27:18.456 19:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:18.456 19:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:27:18.456 19:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:18.456 19:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:27:18.456 19:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:18.456 [2024-07-24 19:28:04.594675] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization...
00:27:18.456 [2024-07-24 19:28:04.594734] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1677124 ]
00:27:18.456 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:18.456 Zero copy mechanism will not be used.
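The trace above launches bdevperf idle (-z) on its own RPC socket and then blocks in waitforlisten until that socket answers, so the follow-up RPCs cannot race the app's startup. A simplified sketch of that launch-and-wait pattern (the real waitforlisten in autotest_common.sh does more bookkeeping; polling with rpc_get_methods is just one cheap way to probe, chosen here as an assumption):

    sock=/var/tmp/bperf.sock
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r "$sock" -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # poll the socket until the app responds (the traced helper allows up to 100 retries)
    for ((i = 0; i < 100; i++)); do
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done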
00:27:18.456 EAL: No free 2048 kB hugepages reported on node 1
00:27:18.456 [2024-07-24 19:28:04.663886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:18.456 [2024-07-24 19:28:04.738230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:27:19.281 19:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:27:19.281 19:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:27:19.281 19:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:19.281 19:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:19.540 19:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:19.540 19:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:19.540 19:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:19.540 19:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:19.540 19:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:19.540 19:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:19.799 nvme0n1
00:27:19.799 19:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:27:19.799 19:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:19.799 19:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:19.799 19:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:19.799 19:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:19.799 19:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:19.799 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:19.799 Zero copy mechanism will not be used.
00:27:19.799 Running I/O for 2 seconds...
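Condensing the trace above, the 131072-byte error run is armed in four RPCs before perform_tests starts the 2-second workload: error statistics and unlimited retries on the host side, a clean injection state, a controller attached with --ddgst (so every data PDU carries a CRC32C digest), and then 32 corrupted crc32c operations queued in the accel layer. A hedged recap of that sequence (in the harness, rpc_cmd goes to the target app's default RPC socket, which is why the injection calls below carry no -s flag; that socket split is inferred from the rpc_cmd/bperf_rpc distinction in the trace, not shown explicitly):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $rpc accel_error_inject_error -o crc32c -t disable        # start from a clean injection state
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32  # corrupt the next 32 crc32c operations
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests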
00:27:19.799 [2024-07-24 19:28:06.009233] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:19.799 [2024-07-24 19:28:06.009638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.799 [2024-07-24 19:28:06.009672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... dozens of identical digest-error triplets omitted: from 19:28:06.019 through 19:28:06.428 the same pattern repeats every 5-10 ms, always on tqpair=(0x2301490) with pdu=0x2000190fef90 and cid:15 in this qd=16 run, with only lba and the sqhd value (cycling 0001/0021/0041/0061) advancing ...]
[2024-07-24 19:28:06.433712] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
[2024-07-24 19:28:06.434068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-24 19:28:06.434089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:20.323 [2024-07-24 19:28:06.439256] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.323 [2024-07-24 19:28:06.439593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.323 [2024-07-24 19:28:06.439614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.323 [2024-07-24 19:28:06.444836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.323 [2024-07-24 19:28:06.445190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.323 [2024-07-24 19:28:06.445210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.323 [2024-07-24 19:28:06.449811] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.323 [2024-07-24 19:28:06.450149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.323 [2024-07-24 19:28:06.450170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.323 [2024-07-24 19:28:06.455168] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.323 [2024-07-24 19:28:06.455517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.323 [2024-07-24 19:28:06.455537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.323 [2024-07-24 19:28:06.460787] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.323 [2024-07-24 19:28:06.461127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.323 [2024-07-24 19:28:06.461148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.323 [2024-07-24 19:28:06.466625] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.323 [2024-07-24 19:28:06.466965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.323 [2024-07-24 19:28:06.466985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.323 [2024-07-24 19:28:06.472909] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.323 [2024-07-24 19:28:06.473226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.323 [2024-07-24 19:28:06.473247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.323 [2024-07-24 19:28:06.479142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.323 [2024-07-24 19:28:06.479476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.323 [2024-07-24 19:28:06.479496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.323 [2024-07-24 19:28:06.486116] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.323 [2024-07-24 19:28:06.486480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.323 [2024-07-24 19:28:06.486500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.323 [2024-07-24 19:28:06.492199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.323 [2024-07-24 19:28:06.492534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.323 [2024-07-24 19:28:06.492555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.323 [2024-07-24 19:28:06.498926] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.323 [2024-07-24 19:28:06.499258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.323 [2024-07-24 19:28:06.499282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.323 [2024-07-24 19:28:06.505057] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.323 [2024-07-24 19:28:06.505396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.323 [2024-07-24 19:28:06.505416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.323 [2024-07-24 19:28:06.511299] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.323 [2024-07-24 19:28:06.511838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.323 [2024-07-24 19:28:06.511858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.323 [2024-07-24 19:28:06.517654] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.323 [2024-07-24 19:28:06.517999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.323 [2024-07-24 19:28:06.518020] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.323 [2024-07-24 19:28:06.524557] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.323 [2024-07-24 19:28:06.524920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.323 [2024-07-24 19:28:06.524940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.323 [2024-07-24 19:28:06.531944] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.323 [2024-07-24 19:28:06.532296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.323 [2024-07-24 19:28:06.532317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.323 [2024-07-24 19:28:06.538642] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.323 [2024-07-24 19:28:06.539001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.323 [2024-07-24 19:28:06.539022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.323 [2024-07-24 19:28:06.544917] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.323 [2024-07-24 19:28:06.545259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.323 [2024-07-24 19:28:06.545279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.323 [2024-07-24 19:28:06.550760] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.323 [2024-07-24 19:28:06.551094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.323 [2024-07-24 19:28:06.551115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.323 [2024-07-24 19:28:06.556881] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.323 [2024-07-24 19:28:06.557217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.323 [2024-07-24 19:28:06.557238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.584 [2024-07-24 19:28:06.562445] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.584 [2024-07-24 19:28:06.562802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.584 [2024-07-24 19:28:06.562823] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.584 [2024-07-24 19:28:06.567831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.584 [2024-07-24 19:28:06.568169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.584 [2024-07-24 19:28:06.568191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.584 [2024-07-24 19:28:06.574152] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.584 [2024-07-24 19:28:06.574487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.584 [2024-07-24 19:28:06.574508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.584 [2024-07-24 19:28:06.580397] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.584 [2024-07-24 19:28:06.580739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.584 [2024-07-24 19:28:06.580760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.584 [2024-07-24 19:28:06.585790] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.584 [2024-07-24 19:28:06.586119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.584 [2024-07-24 19:28:06.586139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.584 [2024-07-24 19:28:06.591190] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.584 [2024-07-24 19:28:06.591529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.584 [2024-07-24 19:28:06.591550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.584 [2024-07-24 19:28:06.596993] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.584 [2024-07-24 19:28:06.597326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.584 [2024-07-24 19:28:06.597346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.584 [2024-07-24 19:28:06.602099] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.584 [2024-07-24 19:28:06.602486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:20.584 [2024-07-24 19:28:06.602506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.584 [2024-07-24 19:28:06.607710] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.584 [2024-07-24 19:28:06.608058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.584 [2024-07-24 19:28:06.608078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.584 [2024-07-24 19:28:06.612914] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.584 [2024-07-24 19:28:06.613241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.584 [2024-07-24 19:28:06.613262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.584 [2024-07-24 19:28:06.623612] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.584 [2024-07-24 19:28:06.624313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.584 [2024-07-24 19:28:06.624334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.584 [2024-07-24 19:28:06.634412] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.584 [2024-07-24 19:28:06.634916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.584 [2024-07-24 19:28:06.634936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.584 [2024-07-24 19:28:06.642485] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.584 [2024-07-24 19:28:06.642814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.584 [2024-07-24 19:28:06.642834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.584 [2024-07-24 19:28:06.648614] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.584 [2024-07-24 19:28:06.648971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.584 [2024-07-24 19:28:06.648991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.584 [2024-07-24 19:28:06.654091] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.584 [2024-07-24 19:28:06.654421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.584 [2024-07-24 19:28:06.654441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.584 [2024-07-24 19:28:06.660252] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.584 [2024-07-24 19:28:06.660675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.584 [2024-07-24 19:28:06.660696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.584 [2024-07-24 19:28:06.666271] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.584 [2024-07-24 19:28:06.666622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.584 [2024-07-24 19:28:06.666650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.584 [2024-07-24 19:28:06.671833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.584 [2024-07-24 19:28:06.672172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.584 [2024-07-24 19:28:06.672192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.584 [2024-07-24 19:28:06.678453] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.584 [2024-07-24 19:28:06.678792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.584 [2024-07-24 19:28:06.678813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.584 [2024-07-24 19:28:06.684832] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.584 [2024-07-24 19:28:06.685180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.584 [2024-07-24 19:28:06.685200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.584 [2024-07-24 19:28:06.691919] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.584 [2024-07-24 19:28:06.692306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.584 [2024-07-24 19:28:06.692326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.585 [2024-07-24 19:28:06.698439] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.585 [2024-07-24 19:28:06.698829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.585 [2024-07-24 19:28:06.698849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.585 [2024-07-24 19:28:06.705355] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.585 [2024-07-24 19:28:06.705703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.585 [2024-07-24 19:28:06.705728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.585 [2024-07-24 19:28:06.712685] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.585 [2024-07-24 19:28:06.713039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.585 [2024-07-24 19:28:06.713059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.585 [2024-07-24 19:28:06.718313] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.585 [2024-07-24 19:28:06.718636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.585 [2024-07-24 19:28:06.718656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.585 [2024-07-24 19:28:06.726753] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.585 [2024-07-24 19:28:06.727095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.585 [2024-07-24 19:28:06.727115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.585 [2024-07-24 19:28:06.733432] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.585 [2024-07-24 19:28:06.733803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.585 [2024-07-24 19:28:06.733824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.585 [2024-07-24 19:28:06.738956] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.585 [2024-07-24 19:28:06.739306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.585 [2024-07-24 19:28:06.739326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.585 [2024-07-24 19:28:06.745082] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:20.585 [2024-07-24 19:28:06.745433] nvme_qpair.c: 
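The tcp.c:2113 data_crc32_calc_done errors above are NVMe/TCP data digest (DDGST) failures: when data digest is negotiated, the receiver computes CRC32C over each data PDU's payload and fails the PDU when that value does not match the digest carried on the wire, which is exactly what this digest-error test provokes. As a rough illustration only (a bitwise CRC32C sketch under that assumption, not SPDK's optimized implementation; the crc32c name here is ours):

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78 --
 * the digest algorithm NVMe/TCP uses for HDGST/DDGST. */
static uint32_t crc32c(const void *buf, size_t len)
{
        const uint8_t *p = buf;
        uint32_t crc = 0xFFFFFFFFu;

        while (len--) {
                crc ^= *p++;
                for (int k = 0; k < 8; k++)
                        crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
        }
        return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
        /* Standard CRC-32C check value: "123456789" -> 0xE3069283. */
        printf("ddgst = 0x%08X\n", crc32c("123456789", 9));
        return 0;
}

A receiver whose computed value differs from the DDGST on the wire reports a digest error and, as in this log, the command completes with a transport-level status instead of surfacing corrupt data.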
00:27:20.585 [2024-07-24 19:28:06.745082] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:20.585 [2024-07-24 19:28:06.745433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:20.585 [2024-07-24 19:28:06.745453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same error/WRITE/completion triple continues on tqpair=(0x2301490), lba varying, from 19:28:06.750177 through 19:28:06.933123 ...]
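Each failed WRITE then completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), i.e. status code type 0x0 (generic) and status code 0x22 (Transient Transport Error), and dnr:0 marks the command as retryable. A minimal sketch of how those fields sit in the completion's status dword (field layout per the NVMe base specification; print_status is an illustrative helper, not an SPDK API):

#include <stdint.h>
#include <stdio.h>

/* Unpack the status portion of an NVMe completion queue entry's DW3:
 * [16]=phase tag, [24:17]=status code, [27:25]=status code type,
 * [30]=more, [31]=do not retry. */
static void print_status(uint32_t dw3)
{
        unsigned p   = (dw3 >> 16) & 0x1;
        unsigned sc  = (dw3 >> 17) & 0xFF;
        unsigned sct = (dw3 >> 25) & 0x7;
        unsigned m   = (dw3 >> 30) & 0x1;
        unsigned dnr = (dw3 >> 31) & 0x1;

        printf("(%02X/%02X) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
}

int main(void)
{
        /* SCT 0x0, SC 0x22 (Transient Transport Error), retryable:
         * prints "(00/22) p:0 m:0 dnr:0" like the completions here. */
        print_status(0x22u << 17);
        return 0;
}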
00:27:20.845 [2024-07-24 19:28:06.940827] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:20.845 [2024-07-24 19:28:06.941040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:20.845 [2024-07-24 19:28:06.941059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the triple keeps repeating, lba varying, from 19:28:06.949096 through 19:28:07.196741 ...]
00:27:21.108 [2024-07-24 19:28:07.201303] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.108 [2024-07-24 19:28:07.201560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.108 [2024-07-24 19:28:07.201580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:21.108 [2024-07-24 19:28:07.206147] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.108 [2024-07-24 19:28:07.206404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT
DATA BLOCK TRANSPORT 0x0 00:27:21.108 [2024-07-24 19:28:07.206424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.108 [2024-07-24 19:28:07.210994] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.108 [2024-07-24 19:28:07.211232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.108 [2024-07-24 19:28:07.211253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.108 [2024-07-24 19:28:07.215822] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.108 [2024-07-24 19:28:07.216064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.108 [2024-07-24 19:28:07.216084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.108 [2024-07-24 19:28:07.220154] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.108 [2024-07-24 19:28:07.220407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.108 [2024-07-24 19:28:07.220428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.108 [2024-07-24 19:28:07.224706] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.108 [2024-07-24 19:28:07.224997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.108 [2024-07-24 19:28:07.225017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.108 [2024-07-24 19:28:07.229538] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.108 [2024-07-24 19:28:07.229826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.108 [2024-07-24 19:28:07.229846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.108 [2024-07-24 19:28:07.234667] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.108 [2024-07-24 19:28:07.234939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.108 [2024-07-24 19:28:07.234959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.108 [2024-07-24 19:28:07.239256] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.108 [2024-07-24 19:28:07.239458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.108 [2024-07-24 19:28:07.239479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.108 [2024-07-24 19:28:07.244605] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.108 [2024-07-24 19:28:07.244846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.108 [2024-07-24 19:28:07.244866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.108 [2024-07-24 19:28:07.250187] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.108 [2024-07-24 19:28:07.250444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.108 [2024-07-24 19:28:07.250464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.108 [2024-07-24 19:28:07.254842] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.108 [2024-07-24 19:28:07.255076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.108 [2024-07-24 19:28:07.255096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.108 [2024-07-24 19:28:07.260024] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.108 [2024-07-24 19:28:07.260242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.108 [2024-07-24 19:28:07.260262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.108 [2024-07-24 19:28:07.264378] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.108 [2024-07-24 19:28:07.264640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.108 [2024-07-24 19:28:07.264660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.108 [2024-07-24 19:28:07.268558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.108 [2024-07-24 19:28:07.268755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.108 [2024-07-24 19:28:07.268774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.108 [2024-07-24 19:28:07.273070] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.108 [2024-07-24 19:28:07.273315] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.108 [2024-07-24 19:28:07.273335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.109 [2024-07-24 19:28:07.277277] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.109 [2024-07-24 19:28:07.277485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.109 [2024-07-24 19:28:07.277505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.109 [2024-07-24 19:28:07.281618] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.109 [2024-07-24 19:28:07.281846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.109 [2024-07-24 19:28:07.281866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.109 [2024-07-24 19:28:07.285659] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.109 [2024-07-24 19:28:07.285927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.109 [2024-07-24 19:28:07.285947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.109 [2024-07-24 19:28:07.290700] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.109 [2024-07-24 19:28:07.291001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.109 [2024-07-24 19:28:07.291022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.109 [2024-07-24 19:28:07.295001] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.109 [2024-07-24 19:28:07.295219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.109 [2024-07-24 19:28:07.295239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.109 [2024-07-24 19:28:07.299514] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.109 [2024-07-24 19:28:07.299786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.109 [2024-07-24 19:28:07.299807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.109 [2024-07-24 19:28:07.303993] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.109 
[2024-07-24 19:28:07.304207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.109 [2024-07-24 19:28:07.304232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.109 [2024-07-24 19:28:07.308484] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.109 [2024-07-24 19:28:07.308733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.109 [2024-07-24 19:28:07.308755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.109 [2024-07-24 19:28:07.313025] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.109 [2024-07-24 19:28:07.313272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.109 [2024-07-24 19:28:07.313293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.109 [2024-07-24 19:28:07.317964] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.109 [2024-07-24 19:28:07.318214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.109 [2024-07-24 19:28:07.318234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.109 [2024-07-24 19:28:07.322300] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.109 [2024-07-24 19:28:07.322559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.109 [2024-07-24 19:28:07.322579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.109 [2024-07-24 19:28:07.327205] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.109 [2024-07-24 19:28:07.327477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.109 [2024-07-24 19:28:07.327498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.109 [2024-07-24 19:28:07.331535] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.109 [2024-07-24 19:28:07.331733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.109 [2024-07-24 19:28:07.331752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.109 [2024-07-24 19:28:07.336780] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.109 [2024-07-24 19:28:07.336979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.109 [2024-07-24 19:28:07.336998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.109 [2024-07-24 19:28:07.342071] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.109 [2024-07-24 19:28:07.342282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.109 [2024-07-24 19:28:07.342303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.370 [2024-07-24 19:28:07.346779] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.370 [2024-07-24 19:28:07.347036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.370 [2024-07-24 19:28:07.347056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.370 [2024-07-24 19:28:07.351738] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.370 [2024-07-24 19:28:07.351954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.370 [2024-07-24 19:28:07.351974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.370 [2024-07-24 19:28:07.356265] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.370 [2024-07-24 19:28:07.356465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.370 [2024-07-24 19:28:07.356483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.370 [2024-07-24 19:28:07.361152] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.370 [2024-07-24 19:28:07.361344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.370 [2024-07-24 19:28:07.361362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.370 [2024-07-24 19:28:07.366058] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.370 [2024-07-24 19:28:07.366270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.370 [2024-07-24 19:28:07.366290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.370 [2024-07-24 19:28:07.370307] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.370 [2024-07-24 19:28:07.370505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.370 [2024-07-24 19:28:07.370525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.370 [2024-07-24 19:28:07.374659] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.370 [2024-07-24 19:28:07.374871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.370 [2024-07-24 19:28:07.374891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.370 [2024-07-24 19:28:07.379259] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.370 [2024-07-24 19:28:07.379462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.370 [2024-07-24 19:28:07.379480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.370 [2024-07-24 19:28:07.384112] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.370 [2024-07-24 19:28:07.384313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.370 [2024-07-24 19:28:07.384333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.370 [2024-07-24 19:28:07.388656] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.370 [2024-07-24 19:28:07.388883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.370 [2024-07-24 19:28:07.388904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.370 [2024-07-24 19:28:07.393308] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.370 [2024-07-24 19:28:07.393529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.370 [2024-07-24 19:28:07.393550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.370 [2024-07-24 19:28:07.397893] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.370 [2024-07-24 19:28:07.398021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.370 [2024-07-24 19:28:07.398041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:27:21.370 [2024-07-24 19:28:07.402756] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.370 [2024-07-24 19:28:07.402949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.370 [2024-07-24 19:28:07.402968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.370 [2024-07-24 19:28:07.407465] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.370 [2024-07-24 19:28:07.407652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.370 [2024-07-24 19:28:07.407672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.370 [2024-07-24 19:28:07.412064] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.370 [2024-07-24 19:28:07.412290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.370 [2024-07-24 19:28:07.412310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.370 [2024-07-24 19:28:07.417193] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.370 [2024-07-24 19:28:07.417399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.370 [2024-07-24 19:28:07.417418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.370 [2024-07-24 19:28:07.422122] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.370 [2024-07-24 19:28:07.422326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.370 [2024-07-24 19:28:07.422345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.370 [2024-07-24 19:28:07.426935] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.370 [2024-07-24 19:28:07.427133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.370 [2024-07-24 19:28:07.427156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.370 [2024-07-24 19:28:07.431545] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.370 [2024-07-24 19:28:07.431738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.370 [2024-07-24 19:28:07.431759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.370 [2024-07-24 19:28:07.436700] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.371 [2024-07-24 19:28:07.436901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.371 [2024-07-24 19:28:07.436921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.371 [2024-07-24 19:28:07.441985] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.371 [2024-07-24 19:28:07.442193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.371 [2024-07-24 19:28:07.442214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.371 [2024-07-24 19:28:07.446566] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.371 [2024-07-24 19:28:07.446775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.371 [2024-07-24 19:28:07.446794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.371 [2024-07-24 19:28:07.451144] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.371 [2024-07-24 19:28:07.451377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.371 [2024-07-24 19:28:07.451398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.371 [2024-07-24 19:28:07.455675] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.371 [2024-07-24 19:28:07.455919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.371 [2024-07-24 19:28:07.455939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.371 [2024-07-24 19:28:07.461839] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.371 [2024-07-24 19:28:07.462034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.371 [2024-07-24 19:28:07.462053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.371 [2024-07-24 19:28:07.467473] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.371 [2024-07-24 19:28:07.467753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.371 [2024-07-24 19:28:07.467774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.371 [2024-07-24 19:28:07.473566] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.371 [2024-07-24 19:28:07.473766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.371 [2024-07-24 19:28:07.473785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.371 [2024-07-24 19:28:07.480111] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.371 [2024-07-24 19:28:07.480379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.371 [2024-07-24 19:28:07.480399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.371 [2024-07-24 19:28:07.485766] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.371 [2024-07-24 19:28:07.486029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.371 [2024-07-24 19:28:07.486049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.371 [2024-07-24 19:28:07.491820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.371 [2024-07-24 19:28:07.492047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.371 [2024-07-24 19:28:07.492067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.371 [2024-07-24 19:28:07.497366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.371 [2024-07-24 19:28:07.497553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.371 [2024-07-24 19:28:07.497573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.371 [2024-07-24 19:28:07.503003] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.371 [2024-07-24 19:28:07.503259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.371 [2024-07-24 19:28:07.503280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.371 [2024-07-24 19:28:07.508537] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.371 [2024-07-24 19:28:07.508795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.371 [2024-07-24 19:28:07.508816] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.371 [2024-07-24 19:28:07.514334] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.371 [2024-07-24 19:28:07.514547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.371 [2024-07-24 19:28:07.514565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.371 [2024-07-24 19:28:07.519913] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.371 [2024-07-24 19:28:07.520192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.371 [2024-07-24 19:28:07.520216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.371 [2024-07-24 19:28:07.526268] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.371 [2024-07-24 19:28:07.526542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.371 [2024-07-24 19:28:07.526563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.371 [2024-07-24 19:28:07.532711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.371 [2024-07-24 19:28:07.533037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.371 [2024-07-24 19:28:07.533057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.371 [2024-07-24 19:28:07.539265] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.371 [2024-07-24 19:28:07.539584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.371 [2024-07-24 19:28:07.539605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.371 [2024-07-24 19:28:07.545870] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.371 [2024-07-24 19:28:07.546158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.371 [2024-07-24 19:28:07.546179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.371 [2024-07-24 19:28:07.552839] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.371 [2024-07-24 19:28:07.553164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.371 
[2024-07-24 19:28:07.553185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.371 [2024-07-24 19:28:07.560049] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.371 [2024-07-24 19:28:07.560287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.371 [2024-07-24 19:28:07.560308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.371 [2024-07-24 19:28:07.565410] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.371 [2024-07-24 19:28:07.565609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.371 [2024-07-24 19:28:07.565628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.371 [2024-07-24 19:28:07.571052] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.371 [2024-07-24 19:28:07.571253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.371 [2024-07-24 19:28:07.571272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.371 [2024-07-24 19:28:07.576821] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.371 [2024-07-24 19:28:07.577051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.371 [2024-07-24 19:28:07.577072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.372 [2024-07-24 19:28:07.583771] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.372 [2024-07-24 19:28:07.584014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.372 [2024-07-24 19:28:07.584033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.372 [2024-07-24 19:28:07.590737] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.372 [2024-07-24 19:28:07.591000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.372 [2024-07-24 19:28:07.591020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.372 [2024-07-24 19:28:07.596299] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.372 [2024-07-24 19:28:07.596536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.372 [2024-07-24 19:28:07.596556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.372 [2024-07-24 19:28:07.601357] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.372 [2024-07-24 19:28:07.601631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.372 [2024-07-24 19:28:07.601652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.372 [2024-07-24 19:28:07.606262] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.372 [2024-07-24 19:28:07.606459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.372 [2024-07-24 19:28:07.606479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.632 [2024-07-24 19:28:07.611203] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.632 [2024-07-24 19:28:07.611448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.632 [2024-07-24 19:28:07.611469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.632 [2024-07-24 19:28:07.617526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.632 [2024-07-24 19:28:07.617867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.632 [2024-07-24 19:28:07.617888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.632 [2024-07-24 19:28:07.624036] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.632 [2024-07-24 19:28:07.624351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.632 [2024-07-24 19:28:07.624371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.632 [2024-07-24 19:28:07.630956] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.632 [2024-07-24 19:28:07.631207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.632 [2024-07-24 19:28:07.631228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.632 [2024-07-24 19:28:07.637534] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.632 [2024-07-24 19:28:07.637813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.632 [2024-07-24 19:28:07.637834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.632 [2024-07-24 19:28:07.642453] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.632 [2024-07-24 19:28:07.642679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.632 [2024-07-24 19:28:07.642700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.632 [2024-07-24 19:28:07.647046] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.632 [2024-07-24 19:28:07.647293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.632 [2024-07-24 19:28:07.647314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.632 [2024-07-24 19:28:07.652061] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.632 [2024-07-24 19:28:07.652383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.632 [2024-07-24 19:28:07.652404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.632 [2024-07-24 19:28:07.656839] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.633 [2024-07-24 19:28:07.657066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.633 [2024-07-24 19:28:07.657086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.633 [2024-07-24 19:28:07.661443] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.633 [2024-07-24 19:28:07.661645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.633 [2024-07-24 19:28:07.661666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.633 [2024-07-24 19:28:07.667281] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.633 [2024-07-24 19:28:07.667477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.633 [2024-07-24 19:28:07.667497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.633 [2024-07-24 19:28:07.672361] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.633 [2024-07-24 19:28:07.672576] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.633 [2024-07-24 19:28:07.672600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.633 [2024-07-24 19:28:07.677075] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.633 [2024-07-24 19:28:07.677271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.633 [2024-07-24 19:28:07.677292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.633 [2024-07-24 19:28:07.681943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.633 [2024-07-24 19:28:07.682152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.633 [2024-07-24 19:28:07.682172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.633 [2024-07-24 19:28:07.686538] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.633 [2024-07-24 19:28:07.686735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.633 [2024-07-24 19:28:07.686754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.633 [2024-07-24 19:28:07.691454] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.633 [2024-07-24 19:28:07.691649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.633 [2024-07-24 19:28:07.691668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.633 [2024-07-24 19:28:07.696232] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.633 [2024-07-24 19:28:07.696423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.633 [2024-07-24 19:28:07.696442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.633 [2024-07-24 19:28:07.700664] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.633 [2024-07-24 19:28:07.700852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.633 [2024-07-24 19:28:07.700873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.633 [2024-07-24 19:28:07.705248] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90 00:27:21.633 
[2024-07-24 19:28:07.705482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.633 [2024-07-24 19:28:07.705503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.633 [2024-07-24 19:28:07.710228] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.633 [2024-07-24 19:28:07.710421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.633 [2024-07-24 19:28:07.710440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:21.633 [2024-07-24 19:28:07.715612] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.633 [2024-07-24 19:28:07.715857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.633 [2024-07-24 19:28:07.715878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:21.633 [2024-07-24 19:28:07.720593] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.633 [2024-07-24 19:28:07.720866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.633 [2024-07-24 19:28:07.720887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:21.633 [2024-07-24 19:28:07.727464] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.633 [2024-07-24 19:28:07.727751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.633 [2024-07-24 19:28:07.727771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.633 [2024-07-24 19:28:07.734600] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.633 [2024-07-24 19:28:07.734834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.633 [2024-07-24 19:28:07.734855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:21.633 [2024-07-24 19:28:07.742859] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.633 [2024-07-24 19:28:07.743197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.633 [2024-07-24 19:28:07.743217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:21.633 [2024-07-24 19:28:07.750743] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.633 [2024-07-24 19:28:07.751047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.633 [2024-07-24 19:28:07.751068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:21.633 [2024-07-24 19:28:07.759140] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.633 [2024-07-24 19:28:07.759367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.633 [2024-07-24 19:28:07.759387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.633 [2024-07-24 19:28:07.766257] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.633 [2024-07-24 19:28:07.766610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.633 [2024-07-24 19:28:07.766631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:21.633 [2024-07-24 19:28:07.773745] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.633 [2024-07-24 19:28:07.773999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.633 [2024-07-24 19:28:07.774020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:21.633 [2024-07-24 19:28:07.781141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.633 [2024-07-24 19:28:07.781448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.633 [2024-07-24 19:28:07.781468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:21.633 [2024-07-24 19:28:07.788788] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.633 [2024-07-24 19:28:07.788996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.633 [2024-07-24 19:28:07.789016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.633 [2024-07-24 19:28:07.797216] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.633 [2024-07-24 19:28:07.797563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.633 [2024-07-24 19:28:07.797583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:21.633 [2024-07-24 19:28:07.804819] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.633 [2024-07-24 19:28:07.805102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.633 [2024-07-24 19:28:07.805123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:21.633 [2024-07-24 19:28:07.812175] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.634 [2024-07-24 19:28:07.812481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.634 [2024-07-24 19:28:07.812501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:21.634 [2024-07-24 19:28:07.819326] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.634 [2024-07-24 19:28:07.819581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.634 [2024-07-24 19:28:07.819601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.634 [2024-07-24 19:28:07.826281] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.634 [2024-07-24 19:28:07.826548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.634 [2024-07-24 19:28:07.826568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:21.634 [2024-07-24 19:28:07.832742] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.634 [2024-07-24 19:28:07.833002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.634 [2024-07-24 19:28:07.833022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:21.634 [2024-07-24 19:28:07.840729] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.634 [2024-07-24 19:28:07.840925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.634 [2024-07-24 19:28:07.840949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:21.634 [2024-07-24 19:28:07.848183] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.634 [2024-07-24 19:28:07.848460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.634 [2024-07-24 19:28:07.848480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.634 [2024-07-24 19:28:07.853798] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.634 [2024-07-24 19:28:07.853993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.634 [2024-07-24 19:28:07.854012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:21.634 [2024-07-24 19:28:07.858172] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.634 [2024-07-24 19:28:07.858407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.634 [2024-07-24 19:28:07.858428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:21.634 [2024-07-24 19:28:07.862449] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.634 [2024-07-24 19:28:07.862667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.634 [2024-07-24 19:28:07.862688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:21.634 [2024-07-24 19:28:07.866795] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.634 [2024-07-24 19:28:07.867073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.634 [2024-07-24 19:28:07.867094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.893 [2024-07-24 19:28:07.871688] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.893 [2024-07-24 19:28:07.871928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.893 [2024-07-24 19:28:07.871949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:21.893 [2024-07-24 19:28:07.878020] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.893 [2024-07-24 19:28:07.878301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.893 [2024-07-24 19:28:07.878322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:21.893 [2024-07-24 19:28:07.884613] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.893 [2024-07-24 19:28:07.884905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.893 [2024-07-24 19:28:07.884926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:21.893 [2024-07-24 19:28:07.890986] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.893 [2024-07-24 19:28:07.891214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.893 [2024-07-24 19:28:07.891234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.894 [2024-07-24 19:28:07.896851] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.894 [2024-07-24 19:28:07.897101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.894 [2024-07-24 19:28:07.897122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:21.894 [2024-07-24 19:28:07.903579] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.894 [2024-07-24 19:28:07.903776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.894 [2024-07-24 19:28:07.903796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:21.894 [2024-07-24 19:28:07.910170] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.894 [2024-07-24 19:28:07.910422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.894 [2024-07-24 19:28:07.910443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:21.894 [2024-07-24 19:28:07.916440] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.894 [2024-07-24 19:28:07.916693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.894 [2024-07-24 19:28:07.916719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.894 [2024-07-24 19:28:07.922005] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.894 [2024-07-24 19:28:07.922221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.894 [2024-07-24 19:28:07.922241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:21.894 [2024-07-24 19:28:07.927070] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.894 [2024-07-24 19:28:07.927272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.894 [2024-07-24 19:28:07.927301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:21.894 [2024-07-24 19:28:07.931748] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.894 [2024-07-24 19:28:07.931998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.894 [2024-07-24 19:28:07.932020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:21.894 [2024-07-24 19:28:07.937710] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.894 [2024-07-24 19:28:07.938069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.894 [2024-07-24 19:28:07.938090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.894 [2024-07-24 19:28:07.943879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.894 [2024-07-24 19:28:07.944153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.894 [2024-07-24 19:28:07.944174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:21.894 [2024-07-24 19:28:07.949154] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.894 [2024-07-24 19:28:07.949426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.894 [2024-07-24 19:28:07.949447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:21.894 [2024-07-24 19:28:07.953997] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.894 [2024-07-24 19:28:07.954235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.894 [2024-07-24 19:28:07.954256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:21.894 [2024-07-24 19:28:07.959002] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.894 [2024-07-24 19:28:07.959278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.894 [2024-07-24 19:28:07.959298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.894 [2024-07-24 19:28:07.963954] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.894 [2024-07-24 19:28:07.964206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.894 [2024-07-24 19:28:07.964227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:21.894 [2024-07-24 19:28:07.968979] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.894 [2024-07-24 19:28:07.969271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.894 [2024-07-24 19:28:07.969292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:21.894 [2024-07-24 19:28:07.974387] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.894 [2024-07-24 19:28:07.974588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.894 [2024-07-24 19:28:07.974607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:21.894 [2024-07-24 19:28:07.979913] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.894 [2024-07-24 19:28:07.980112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.894 [2024-07-24 19:28:07.980131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:21.894 [2024-07-24 19:28:07.987255] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2301490) with pdu=0x2000190fef90
00:27:21.894 [2024-07-24 19:28:07.987567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.894 [2024-07-24 19:28:07.987588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:21.894
00:27:21.894 Latency(us)
00:27:21.894 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:21.894 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:27:21.894 nvme0n1 : 2.00 5100.74 637.59 0.00 0.00 3132.01 1926.76 19188.94
00:27:21.894 ===================================================================================================================
00:27:21.894 Total : 5100.74 637.59 0.00 0.00 3132.01 1926.76 19188.94
00:27:21.894 0
00:27:21.894 19:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:21.894 19:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:21.894 19:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:21.894 | .driver_specific
00:27:21.894 | .nvme_error
00:27:21.894 | .status_code
00:27:21.894 | .command_transient_transport_error'
00:27:21.894 19:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:22.154 19:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 329 > 0 ))
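For readers reproducing this check outside the harness: the get_transient_errcount helper traced above is a thin wrapper that pipes the bdev_get_iostat RPC through the jq filter shown. A minimal standalone sketch follows; the rpc.py path, socket, and jq path are taken from the trace, while the sample function packaging is our own.

#!/usr/bin/env bash
# Sketch: count transient transport errors for a bdev via the SPDK RPC socket.
# The jq path mirrors the filter in host/digest.sh; the exact JSON layout of
# the RPC reply is inferred from that filter, not from SPDK documentation.
get_transient_errcount() {
    local bdev=$1
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}
# Usage, matching the test's assertion: require at least one injected error.
(( $(get_transient_errcount nvme0n1) > 0 ))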
00:27:22.154 19:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1677124
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1677124 ']'
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1677124
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1677124
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1677124'
killing process with pid 1677124
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1677124
Received shutdown signal, test time was about 2.000000 seconds
00:27:22.154
00:27:22.154 Latency(us)
00:27:22.154 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:22.154 ===================================================================================================================
00:27:22.154 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1677124
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1674979
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1674979 ']'
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1674979
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1674979
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1674979'
killing process with pid 1674979
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1674979
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1674979
00:27:22.672
00:27:22.672 real 0m16.855s
00:27:22.672 user 0m31.684s
00:27:22.672 sys 0m5.151s
00:27:22.672 19:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:22.672 ************************************
00:27:22.672 END TEST nvmf_digest_error
00:27:22.672 ************************************
00:27:22.672 19:28:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1674979 ']'
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1674979
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 1674979 ']'
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 1674979
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1674979) - No such process
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 1674979 is not found'
Process with pid 1674979 is not found
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
19:28:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:25.207 19:28:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:27:25.207
00:27:25.207 real 0m43.023s
00:27:25.207 user 1m5.490s
00:27:25.207 sys 0m15.506s
00:27:25.207 19:28:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable
19:28:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:27:25.207 ************************************
00:27:25.207 END TEST nvmf_digest
00:27:25.207 ************************************
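The killprocess sequence traced twice above follows a fixed guard pattern before sending the signal. A simplified bash reconstruction of that idiom is sketched below; the line-number comments point at the autotest_common.sh trace markers, but this is our condensation, not the verbatim SPDK helper.

#!/usr/bin/env bash
# Sketch of the killprocess guard idiom seen in the trace.
killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1               # @950: a pid argument is required
    kill -0 "$pid" || return 1              # @954: bail out if already gone
    if [ "$(uname)" = Linux ]; then         # @955
        process_name=$(ps --no-headers -o comm= "$pid")   # @956
    fi
    [ "$process_name" = sudo ] && return 1  # @960: never signal a sudo wrapper
    echo "killing process with pid $pid"    # @968
    kill "$pid"                             # @969: default SIGTERM
    wait "$pid"                             # @974: reap and collect exit status
}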
00:27:25.207 19:28:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
00:27:25.207 19:28:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
00:27:25.207 19:28:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
00:27:25.207 19:28:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:27:25.207 19:28:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:27:25.207 19:28:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:27:25.207 19:28:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:27:25.207 ************************************
00:27:25.207 START TEST nvmf_bdevperf
00:27:25.207 ************************************
00:27:25.207 19:28:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:27:25.207 * Looking for test storage...
00:27:25.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:27:25.207 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:27:25.207 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:27:25.207 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:25.208 19:28:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:31.780 19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]]
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=()
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=()
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=()
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=()
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=()
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=()
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=()
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 ))
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
Found 0000:af:00.0 (0x8086 - 0x159b)
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
Found 0000:af:00.1 (0x8086 - 0x159b)
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 ))
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]]
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 ))
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
Found net devices under 0000:af:00.0: cvl_0_0
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]]
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 ))
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
Found net devices under 0000:af:00.1: cvl_0_1
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 ))
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]]
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 ))
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
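The device discovery traced above reduces to a simple sysfs walk: for each supported PCI function (here the Intel e810, vendor:device 8086:159b), list the net interfaces the kernel exposes under that function. A hand-runnable bash sketch, with the two PCI addresses taken from the log output:

#!/usr/bin/env bash
# Sketch of gather_supported_nvmf_pci_devs' sysfs lookup for NIC interfaces.
for pci in 0000:af:00.0 0000:af:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs entries per port
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep interface names only
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done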
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms

--- 10.0.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms
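Condensed from the nvmf_tcp_init trace above, the whole network fixture is ten commands: move the target-side port into a private namespace, address both ends, open the NVMe/TCP port, and verify reachability in both directions. Every command below is verbatim from the trace; only the grouping into a standalone script is ours.

#!/usr/bin/env bash
# Sketch: build the two-sided test network used by the TCP autotests.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator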
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:31.781 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms
00:27:31.781
00:27:31.781 --- 10.0.0.1 ping statistics ---
00:27:31.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:31.781 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms
00:27:31.781 19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']'
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1681444
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1681444
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1681444 ']'
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable
19:28:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
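Stripped of the harness plumbing, the target startup traced here is a background launch inside the namespace followed by the SPDK waitforlisten poll. A minimal hand-runnable sketch; the backgrounding and $! capture are assumptions, since the harness routes this through nvmfappstart rather than a bare shell:

#!/usr/bin/env bash
# Sketch: start nvmf_tgt in the target namespace and wait for its RPC socket.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
waitforlisten "$nvmfpid"   # SPDK helper: polls /var/tmp/spdk.sock until RPCs answer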
00:27:31.781 [2024-07-24 19:28:17.693164] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization...
00:27:31.781 [2024-07-24 19:28:17.693214] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:31.781 EAL: No free 2048 kB hugepages reported on node 1
00:27:31.781 [2024-07-24 19:28:17.767349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:27:31.781 [2024-07-24 19:28:17.840154] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:31.781 [2024-07-24 19:28:17.840193] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:31.781 [2024-07-24 19:28:17.840203] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:31.781 [2024-07-24 19:28:17.840212] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:31.781 [2024-07-24 19:28:17.840219] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:31.781 [2024-07-24 19:28:17.840328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:27:31.781 [2024-07-24 19:28:17.840413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:27:31.781 [2024-07-24 19:28:17.840415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:27:32.348 19:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:27:32.348 19:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0
00:27:32.348 19:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:27:32.348 19:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:27:32.348 19:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:32.348 19:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:32.348 19:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:27:32.348 19:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.348 19:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:32.348 [2024-07-24 19:28:18.547023] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:32.349 19:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.349 19:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:32.349 19:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.349 19:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:32.349 Malloc0
00:27:32.349 19:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.349 19:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:32.349 19:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.349 19:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:32.608 19:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.608 19:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:32.608 19:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.608 19:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:32.608 19:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.608 19:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:32.608 19:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.608 19:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:32.608 [2024-07-24 19:28:18.607167] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:32.608 19:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.608 19:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:27:32.608 19:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:27:32.608 19:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:27:32.608 19:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:27:32.608 19:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:27:32.608 19:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:27:32.608 {
00:27:32.608 "params": {
00:27:32.608 "name": "Nvme$subsystem",
00:27:32.608 "trtype": "$TEST_TRANSPORT",
00:27:32.608 "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:32.608 "adrfam": "ipv4",
00:27:32.608 "trsvcid": "$NVMF_PORT",
00:27:32.608 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:32.608 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:32.608 "hdgst": ${hdgst:-false},
00:27:32.608 "ddgst": ${ddgst:-false}
00:27:32.608 },
00:27:32.608 "method": "bdev_nvme_attach_controller"
00:27:32.608 }
00:27:32.608 EOF
00:27:32.608 )")
00:27:32.608 19:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:27:32.608 19:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:27:32.608 19:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:27:32.608 19:28:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:27:32.608 "params": {
00:27:32.608 "name": "Nvme1",
00:27:32.608 "trtype": "tcp",
00:27:32.608 "traddr": "10.0.0.2",
00:27:32.608 "adrfam": "ipv4",
00:27:32.608 "trsvcid": "4420",
00:27:32.608 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:27:32.608 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:27:32.608 "hdgst": false,
00:27:32.608 "ddgst": false
00:27:32.608 },
00:27:32.608 "method": "bdev_nvme_attach_controller"
00:27:32.608 }'
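The attach-controller object printed above is what bdevperf receives over /dev/fd/62. A hand-replayable reconstruction follows; only the inner object is verbatim from the log, while the outer "subsystems"/"bdev" scaffolding is an assumption about what gen_nvmf_target_json wraps around it, and the file path is illustrative.

#!/usr/bin/env bash
# Sketch: replay the bdevperf attach-controller config by hand.
cat > /tmp/bperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/bperf.json -q 128 -o 4096 -w verify -t 1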
00:27:32.608 [2024-07-24 19:28:18.660251] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1681509 ] 00:27:32.608 EAL: No free 2048 kB hugepages reported on node 1 00:27:32.608 [2024-07-24 19:28:18.730776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.608 [2024-07-24 19:28:18.799876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.867 Running I/O for 1 seconds... 00:27:33.839 00:27:33.839 Latency(us) 00:27:33.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:33.839 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:33.839 Verification LBA range: start 0x0 length 0x4000 00:27:33.839 Nvme1n1 : 1.01 11599.06 45.31 0.00 0.00 10996.04 1874.33 14680.06 00:27:33.839 =================================================================================================================== 00:27:33.839 Total : 11599.06 45.31 0.00 0.00 10996.04 1874.33 14680.06 00:27:34.099 19:28:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1681785 00:27:34.099 19:28:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:27:34.099 19:28:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:27:34.099 19:28:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:27:34.099 19:28:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:27:34.099 19:28:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:27:34.099 19:28:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:34.099 19:28:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:34.099 { 00:27:34.099 "params": { 00:27:34.099 "name": "Nvme$subsystem", 00:27:34.099 "trtype": "$TEST_TRANSPORT", 00:27:34.099 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.099 "adrfam": "ipv4", 00:27:34.099 "trsvcid": "$NVMF_PORT", 00:27:34.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.099 "hdgst": ${hdgst:-false}, 00:27:34.099 "ddgst": ${ddgst:-false} 00:27:34.099 }, 00:27:34.099 "method": "bdev_nvme_attach_controller" 00:27:34.099 } 00:27:34.099 EOF 00:27:34.099 )") 00:27:34.099 19:28:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:27:34.099 19:28:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:27:34.099 19:28:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:27:34.099 19:28:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:34.099 "params": { 00:27:34.099 "name": "Nvme1", 00:27:34.099 "trtype": "tcp", 00:27:34.099 "traddr": "10.0.0.2", 00:27:34.099 "adrfam": "ipv4", 00:27:34.099 "trsvcid": "4420", 00:27:34.099 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:34.099 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:34.099 "hdgst": false, 00:27:34.099 "ddgst": false 00:27:34.099 }, 00:27:34.099 "method": "bdev_nvme_attach_controller" 00:27:34.099 }' 00:27:34.099 [2024-07-24 19:28:20.239652] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:27:34.099 19:28:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1681785
00:27:34.099 19:28:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:27:34.099 19:28:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:27:34.099 19:28:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:27:34.099 19:28:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:27:34.099 19:28:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:27:34.099 19:28:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:27:34.099 19:28:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:27:34.099 {
00:27:34.099 "params": {
00:27:34.099 "name": "Nvme$subsystem",
00:27:34.099 "trtype": "$TEST_TRANSPORT",
00:27:34.099 "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:34.099 "adrfam": "ipv4",
00:27:34.099 "trsvcid": "$NVMF_PORT",
00:27:34.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:34.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:34.099 "hdgst": ${hdgst:-false},
00:27:34.099 "ddgst": ${ddgst:-false}
00:27:34.099 },
00:27:34.099 "method": "bdev_nvme_attach_controller"
00:27:34.099 }
00:27:34.099 EOF
00:27:34.099 )")
00:27:34.099 19:28:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:27:34.099 19:28:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:27:34.099 19:28:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:27:34.099 19:28:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:27:34.099 "params": {
00:27:34.099 "name": "Nvme1",
00:27:34.099 "trtype": "tcp",
00:27:34.099 "traddr": "10.0.0.2",
00:27:34.099 "adrfam": "ipv4",
00:27:34.099 "trsvcid": "4420",
00:27:34.099 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:27:34.099 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:27:34.099 "hdgst": false,
00:27:34.099 "ddgst": false
00:27:34.099 },
00:27:34.099 "method": "bdev_nvme_attach_controller"
00:27:34.099 }'
00:27:34.099 [2024-07-24 19:28:20.239652] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization...
00:27:34.099 [2024-07-24 19:28:20.239706] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1681785 ]
00:27:34.099 EAL: No free 2048 kB hugepages reported on node 1
00:27:34.099 [2024-07-24 19:28:20.310250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:34.358 [2024-07-24 19:28:20.375258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:27:34.358 Running I/O for 15 seconds...
00:27:37.650 19:28:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1681444
00:27:37.650 19:28:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:27:37.650 [2024-07-24 19:28:23.208998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:120840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:37.650 [2024-07-24 19:28:23.209044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:37.650 [2024-07-24 19:28:23.209062 - 19:28:23.211805] [... 126 further print_command/print_completion pairs, identical in form, as the rest of the queued i/o on sqid:1 (READ lba:120848-121560 and WRITE lba:121576-121856, len:8 each) completes as ABORTED - SQ DELETION (00/08) ...]
00:27:37.654 [2024-07-24 19:28:23.211815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11938c0 is same with the state(5) to be set
00:27:37.654 [2024-07-24 19:28:23.211827] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:37.654 [2024-07-24 19:28:23.211834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:37.654 [2024-07-24 19:28:23.211842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121568 len:8 PRP1 0x0 PRP2 0x0
00:27:37.654 [2024-07-24 19:28:23.211852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:37.654 [2024-07-24 19:28:23.211896] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11938c0 was disconnected and freed. reset controller.
00:27:37.654 [2024-07-24 19:28:23.214593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.654 [2024-07-24 19:28:23.214645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:37.654 [2024-07-24 19:28:23.215284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.654 [2024-07-24 19:28:23.215337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:37.654 [2024-07-24 19:28:23.215371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:37.654 [2024-07-24 19:28:23.215962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:37.654 [2024-07-24 19:28:23.216132] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.654 [2024-07-24 19:28:23.216143] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.654 [2024-07-24 19:28:23.216156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.654 [2024-07-24 19:28:23.218846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
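Two decode notes for the burst above: the "(00/08)" in the completions is NVMe status code type 0h (generic command status) / status code 08h, Command Aborted due to SQ Deletion, which is what every in-flight command receives once its submission queue dies with the target; and errno = 111 from posix_sock_create is ECONNREFUSED, exactly what the kill -9 should produce, since nothing is listening on 10.0.0.2:4420 anymore.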
00:27:37.655 [2024-07-24 19:28:23.240585] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:37.655 [2024-07-24 19:28:23.240953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.655 [2024-07-24 19:28:23.241017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:37.655 [2024-07-24 19:28:23.241050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:37.655 [2024-07-24 19:28:23.241640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:37.655 [2024-07-24 19:28:23.242248] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:37.655 [2024-07-24 19:28:23.242292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:37.655 [2024-07-24 19:28:23.242302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:37.655 [2024-07-24 19:28:23.244807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:37.655 [2024-07-24 19:28:23.253376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:37.655 [2024-07-24 19:28:23.253842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.655 [2024-07-24 19:28:23.253896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:37.655 [2024-07-24 19:28:23.253929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:37.655 [2024-07-24 19:28:23.254402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:37.655 [2024-07-24 19:28:23.254560] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:37.655 [2024-07-24 19:28:23.254571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:37.655 [2024-07-24 19:28:23.254580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:37.655 [2024-07-24 19:28:23.257128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:37.655 [2024-07-24 19:28:23.266075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:37.655 [2024-07-24 19:28:23.266517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.655 [2024-07-24 19:28:23.266569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:37.655 [2024-07-24 19:28:23.266601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:37.655 [2024-07-24 19:28:23.267080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:37.655 [2024-07-24 19:28:23.267248] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:37.655 [2024-07-24 19:28:23.267260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:37.655 [2024-07-24 19:28:23.267269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:37.655 [2024-07-24 19:28:23.269771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:37.655 [2024-07-24 19:28:23.278850] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:37.655 [2024-07-24 19:28:23.279261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.655 [2024-07-24 19:28:23.279279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:37.655 [2024-07-24 19:28:23.279288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:37.655 [2024-07-24 19:28:23.279445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:37.655 [2024-07-24 19:28:23.279602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:37.655 [2024-07-24 19:28:23.279612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:37.655 [2024-07-24 19:28:23.279621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:37.655 [2024-07-24 19:28:23.282167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:37.921 [2024-07-24 19:28:23.906806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:37.921 [2024-07-24 19:28:23.907241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.921 [2024-07-24 19:28:23.907292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:37.921 [2024-07-24 19:28:23.907324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:37.921 [2024-07-24 19:28:23.907792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:37.921 [2024-07-24 19:28:23.907950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:37.921 [2024-07-24 19:28:23.907961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:37.921 [2024-07-24 19:28:23.907973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:37.921 [2024-07-24 19:28:23.910518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:37.921 [2024-07-24 19:28:23.919591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:37.921 [2024-07-24 19:28:23.920082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.921 [2024-07-24 19:28:23.920134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:37.921 [2024-07-24 19:28:23.920167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:37.921 [2024-07-24 19:28:23.920772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:37.921 [2024-07-24 19:28:23.921173] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:37.921 [2024-07-24 19:28:23.921184] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:37.921 [2024-07-24 19:28:23.921193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:37.921 [2024-07-24 19:28:23.923740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:37.921 [2024-07-24 19:28:23.932329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:37.921 [2024-07-24 19:28:23.932748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.921 [2024-07-24 19:28:23.932766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:37.921 [2024-07-24 19:28:23.932775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:37.921 [2024-07-24 19:28:23.932932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:37.921 [2024-07-24 19:28:23.933090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:37.921 [2024-07-24 19:28:23.933101] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:37.921 [2024-07-24 19:28:23.933109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:37.921 [2024-07-24 19:28:23.935654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:37.921 [2024-07-24 19:28:23.945117] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:37.921 [2024-07-24 19:28:23.945619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.921 [2024-07-24 19:28:23.945671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:37.921 [2024-07-24 19:28:23.945703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:37.921 [2024-07-24 19:28:23.946037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:37.921 [2024-07-24 19:28:23.946204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:37.921 [2024-07-24 19:28:23.946215] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:37.921 [2024-07-24 19:28:23.946225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:37.921 [2024-07-24 19:28:23.948730] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:37.921 [2024-07-24 19:28:23.957927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:37.921 [2024-07-24 19:28:23.958364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.921 [2024-07-24 19:28:23.958382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:37.921 [2024-07-24 19:28:23.958394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:37.921 [2024-07-24 19:28:23.958560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:37.921 [2024-07-24 19:28:23.958732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:37.921 [2024-07-24 19:28:23.958746] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:37.921 [2024-07-24 19:28:23.958756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:37.921 [2024-07-24 19:28:23.961236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:37.921 [2024-07-24 19:28:23.970782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:37.921 [2024-07-24 19:28:23.971246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.921 [2024-07-24 19:28:23.971264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:37.921 [2024-07-24 19:28:23.971274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:37.921 [2024-07-24 19:28:23.971439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:37.921 [2024-07-24 19:28:23.971604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:37.921 [2024-07-24 19:28:23.971616] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:37.921 [2024-07-24 19:28:23.971625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:37.922 [2024-07-24 19:28:23.974295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:37.922 [2024-07-24 19:28:23.983710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:37.922 [2024-07-24 19:28:23.984198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.922 [2024-07-24 19:28:23.984216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:37.922 [2024-07-24 19:28:23.984226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:37.922 [2024-07-24 19:28:23.984392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:37.922 [2024-07-24 19:28:23.984557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:37.922 [2024-07-24 19:28:23.984568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:37.922 [2024-07-24 19:28:23.984577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:37.922 [2024-07-24 19:28:23.987176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:37.922 [2024-07-24 19:28:23.996522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:37.922 [2024-07-24 19:28:23.997000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.922 [2024-07-24 19:28:23.997019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:37.922 [2024-07-24 19:28:23.997029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:37.922 [2024-07-24 19:28:23.997194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:37.922 [2024-07-24 19:28:23.997362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:37.922 [2024-07-24 19:28:23.997374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:37.922 [2024-07-24 19:28:23.997383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:37.922 [2024-07-24 19:28:23.999982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:37.922 [2024-07-24 19:28:24.009276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:37.922 [2024-07-24 19:28:24.009741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.922 [2024-07-24 19:28:24.009795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:37.922 [2024-07-24 19:28:24.009827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:37.922 [2024-07-24 19:28:24.010253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:37.922 [2024-07-24 19:28:24.010419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:37.922 [2024-07-24 19:28:24.010431] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:37.922 [2024-07-24 19:28:24.010440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:37.922 [2024-07-24 19:28:24.012936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:37.922 [2024-07-24 19:28:24.022066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:37.922 [2024-07-24 19:28:24.022535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.922 [2024-07-24 19:28:24.022587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:37.922 [2024-07-24 19:28:24.022620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:37.922 [2024-07-24 19:28:24.023091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:37.922 [2024-07-24 19:28:24.023250] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:37.922 [2024-07-24 19:28:24.023261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:37.922 [2024-07-24 19:28:24.023270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:37.922 [2024-07-24 19:28:24.025866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:37.922 [2024-07-24 19:28:24.034893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:37.922 [2024-07-24 19:28:24.035293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.922 [2024-07-24 19:28:24.035311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:37.922 [2024-07-24 19:28:24.035321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:37.922 [2024-07-24 19:28:24.035478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:37.922 [2024-07-24 19:28:24.035634] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:37.922 [2024-07-24 19:28:24.035645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:37.922 [2024-07-24 19:28:24.035654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:37.922 [2024-07-24 19:28:24.038211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:37.922 [2024-07-24 19:28:24.047759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:37.922 [2024-07-24 19:28:24.048239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.922 [2024-07-24 19:28:24.048302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:37.922 [2024-07-24 19:28:24.048334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:37.922 [2024-07-24 19:28:24.048941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:37.922 [2024-07-24 19:28:24.049450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:37.922 [2024-07-24 19:28:24.049461] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:37.922 [2024-07-24 19:28:24.049470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:37.922 [2024-07-24 19:28:24.052013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:37.922 [2024-07-24 19:28:24.060473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:37.922 [2024-07-24 19:28:24.060962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.922 [2024-07-24 19:28:24.061014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:37.922 [2024-07-24 19:28:24.061046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:37.922 [2024-07-24 19:28:24.061507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:37.922 [2024-07-24 19:28:24.061665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:37.922 [2024-07-24 19:28:24.061676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:37.922 [2024-07-24 19:28:24.061685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:37.922 [2024-07-24 19:28:24.064187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:37.922 [2024-07-24 19:28:24.073269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:37.922 [2024-07-24 19:28:24.073762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.922 [2024-07-24 19:28:24.073791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:37.922 [2024-07-24 19:28:24.073801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:37.922 [2024-07-24 19:28:24.073957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:37.922 [2024-07-24 19:28:24.074113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:37.922 [2024-07-24 19:28:24.074124] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:37.922 [2024-07-24 19:28:24.074132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:37.922 [2024-07-24 19:28:24.076623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:37.922 [2024-07-24 19:28:24.085985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:37.922 [2024-07-24 19:28:24.086503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.922 [2024-07-24 19:28:24.086520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:37.922 [2024-07-24 19:28:24.086533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:37.922 [2024-07-24 19:28:24.086689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:37.922 [2024-07-24 19:28:24.086875] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:37.922 [2024-07-24 19:28:24.086888] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:37.922 [2024-07-24 19:28:24.086897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:37.922 [2024-07-24 19:28:24.089415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:37.922 [2024-07-24 19:28:24.098725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:37.923 [2024-07-24 19:28:24.099164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.923 [2024-07-24 19:28:24.099225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:37.923 [2024-07-24 19:28:24.099259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:37.923 [2024-07-24 19:28:24.099867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:37.923 [2024-07-24 19:28:24.100446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:37.923 [2024-07-24 19:28:24.100456] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:37.923 [2024-07-24 19:28:24.100466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:37.923 [2024-07-24 19:28:24.102953] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:37.923 [2024-07-24 19:28:24.111573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:37.923 [2024-07-24 19:28:24.111986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.923 [2024-07-24 19:28:24.112005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:37.923 [2024-07-24 19:28:24.112014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:37.923 [2024-07-24 19:28:24.112171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:37.923 [2024-07-24 19:28:24.112328] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:37.923 [2024-07-24 19:28:24.112339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:37.923 [2024-07-24 19:28:24.112348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:37.923 [2024-07-24 19:28:24.114832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:37.923 [2024-07-24 19:28:24.124369] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:37.923 [2024-07-24 19:28:24.124804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.923 [2024-07-24 19:28:24.124821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:37.923 [2024-07-24 19:28:24.124831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:37.923 [2024-07-24 19:28:24.124997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:37.923 [2024-07-24 19:28:24.125162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:37.923 [2024-07-24 19:28:24.125176] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:37.923 [2024-07-24 19:28:24.125185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:37.923 [2024-07-24 19:28:24.127693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:37.923 [2024-07-24 19:28:24.137191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:37.923 [2024-07-24 19:28:24.137649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.923 [2024-07-24 19:28:24.137699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:37.923 [2024-07-24 19:28:24.137748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:37.923 [2024-07-24 19:28:24.138252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:37.923 [2024-07-24 19:28:24.138410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:37.923 [2024-07-24 19:28:24.138421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:37.923 [2024-07-24 19:28:24.138430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:37.923 [2024-07-24 19:28:24.140917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:37.923 [2024-07-24 19:28:24.150005] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:37.923 [2024-07-24 19:28:24.150365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.923 [2024-07-24 19:28:24.150383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:37.923 [2024-07-24 19:28:24.150393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:37.923 [2024-07-24 19:28:24.150559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:37.923 [2024-07-24 19:28:24.150731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:37.923 [2024-07-24 19:28:24.150742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:37.923 [2024-07-24 19:28:24.150752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:37.923 [2024-07-24 19:28:24.153230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.184 [2024-07-24 19:28:24.162845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.184 [2024-07-24 19:28:24.163279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.184 [2024-07-24 19:28:24.163330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.184 [2024-07-24 19:28:24.163363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.184 [2024-07-24 19:28:24.163972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.184 [2024-07-24 19:28:24.164565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.184 [2024-07-24 19:28:24.164577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.184 [2024-07-24 19:28:24.164587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.184 [2024-07-24 19:28:24.167186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.184 [2024-07-24 19:28:24.175633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.184 [2024-07-24 19:28:24.176041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.184 [2024-07-24 19:28:24.176093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.184 [2024-07-24 19:28:24.176126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.184 [2024-07-24 19:28:24.176558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.184 [2024-07-24 19:28:24.176723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.184 [2024-07-24 19:28:24.176734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.184 [2024-07-24 19:28:24.176743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.184 [2024-07-24 19:28:24.179283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.184 [2024-07-24 19:28:24.188503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.184 [2024-07-24 19:28:24.189024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.184 [2024-07-24 19:28:24.189075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.184 [2024-07-24 19:28:24.189107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.184 [2024-07-24 19:28:24.189590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.184 [2024-07-24 19:28:24.189841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.184 [2024-07-24 19:28:24.189856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.184 [2024-07-24 19:28:24.189870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.184 [2024-07-24 19:28:24.193609] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.184 [2024-07-24 19:28:24.201811] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.184 [2024-07-24 19:28:24.202267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.184 [2024-07-24 19:28:24.202318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.184 [2024-07-24 19:28:24.202350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.184 [2024-07-24 19:28:24.202859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.184 [2024-07-24 19:28:24.203027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.184 [2024-07-24 19:28:24.203039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.184 [2024-07-24 19:28:24.203048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.184 [2024-07-24 19:28:24.205563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.184 [2024-07-24 19:28:24.214634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.184 [2024-07-24 19:28:24.215069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.184 [2024-07-24 19:28:24.215119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.184 [2024-07-24 19:28:24.215151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.184 [2024-07-24 19:28:24.215642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.184 [2024-07-24 19:28:24.215824] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.184 [2024-07-24 19:28:24.215835] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.184 [2024-07-24 19:28:24.215845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.184 [2024-07-24 19:28:24.218366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.184 [2024-07-24 19:28:24.227339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.184 [2024-07-24 19:28:24.227847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.184 [2024-07-24 19:28:24.227866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.184 [2024-07-24 19:28:24.227876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.184 [2024-07-24 19:28:24.228047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.184 [2024-07-24 19:28:24.228217] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.184 [2024-07-24 19:28:24.228229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.184 [2024-07-24 19:28:24.228239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.184 [2024-07-24 19:28:24.230910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.184 [2024-07-24 19:28:24.240285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.184 [2024-07-24 19:28:24.240797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.184 [2024-07-24 19:28:24.240852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.184 [2024-07-24 19:28:24.240885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.184 [2024-07-24 19:28:24.241477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.184 [2024-07-24 19:28:24.241678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.184 [2024-07-24 19:28:24.241690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.184 [2024-07-24 19:28:24.241698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.184 [2024-07-24 19:28:24.244296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.184 [2024-07-24 19:28:24.253169] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.184 [2024-07-24 19:28:24.253676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.184 [2024-07-24 19:28:24.253743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.184 [2024-07-24 19:28:24.253777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.184 [2024-07-24 19:28:24.254368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.184 [2024-07-24 19:28:24.254776] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.184 [2024-07-24 19:28:24.254788] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.184 [2024-07-24 19:28:24.254800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.184 [2024-07-24 19:28:24.257394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.184 [2024-07-24 19:28:24.265911] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.184 [2024-07-24 19:28:24.266415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.184 [2024-07-24 19:28:24.266466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.184 [2024-07-24 19:28:24.266499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.184 [2024-07-24 19:28:24.266975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.184 [2024-07-24 19:28:24.267138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.184 [2024-07-24 19:28:24.267149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.184 [2024-07-24 19:28:24.267158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.184 [2024-07-24 19:28:24.269616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.185 [2024-07-24 19:28:24.278730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.185 [2024-07-24 19:28:24.279236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.185 [2024-07-24 19:28:24.279253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.185 [2024-07-24 19:28:24.279263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.185 [2024-07-24 19:28:24.279419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.185 [2024-07-24 19:28:24.279577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.185 [2024-07-24 19:28:24.279587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.185 [2024-07-24 19:28:24.279596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.185 [2024-07-24 19:28:24.282205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.185 [2024-07-24 19:28:24.291619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.185 [2024-07-24 19:28:24.292050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.185 [2024-07-24 19:28:24.292068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.185 [2024-07-24 19:28:24.292077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.185 [2024-07-24 19:28:24.292242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.185 [2024-07-24 19:28:24.292409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.185 [2024-07-24 19:28:24.292420] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.185 [2024-07-24 19:28:24.292429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.185 [2024-07-24 19:28:24.295044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.185 [2024-07-24 19:28:24.304425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.185 [2024-07-24 19:28:24.304919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.185 [2024-07-24 19:28:24.304970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.185 [2024-07-24 19:28:24.305003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.185 [2024-07-24 19:28:24.305592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.185 [2024-07-24 19:28:24.306112] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.185 [2024-07-24 19:28:24.306123] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.185 [2024-07-24 19:28:24.306133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.185 [2024-07-24 19:28:24.308644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.185 [2024-07-24 19:28:24.317224] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.185 [2024-07-24 19:28:24.317691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.185 [2024-07-24 19:28:24.317709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.185 [2024-07-24 19:28:24.317722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.185 [2024-07-24 19:28:24.317903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.185 [2024-07-24 19:28:24.318070] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.185 [2024-07-24 19:28:24.318081] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.185 [2024-07-24 19:28:24.318090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.185 [2024-07-24 19:28:24.320596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.185 [2024-07-24 19:28:24.329982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.185 [2024-07-24 19:28:24.330504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.185 [2024-07-24 19:28:24.330555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.185 [2024-07-24 19:28:24.330588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.185 [2024-07-24 19:28:24.331072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.185 [2024-07-24 19:28:24.331261] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.185 [2024-07-24 19:28:24.331277] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.185 [2024-07-24 19:28:24.331290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.185 [2024-07-24 19:28:24.335026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.185 [2024-07-24 19:28:24.343232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.185 [2024-07-24 19:28:24.343724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.185 [2024-07-24 19:28:24.343742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.185 [2024-07-24 19:28:24.343751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.185 [2024-07-24 19:28:24.343908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.185 [2024-07-24 19:28:24.344067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.185 [2024-07-24 19:28:24.344078] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.185 [2024-07-24 19:28:24.344087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.185 [2024-07-24 19:28:24.346631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.185 [2024-07-24 19:28:24.356004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.185 [2024-07-24 19:28:24.356421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.185 [2024-07-24 19:28:24.356438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.185 [2024-07-24 19:28:24.356447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.185 [2024-07-24 19:28:24.356603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.185 [2024-07-24 19:28:24.356783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.185 [2024-07-24 19:28:24.356794] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.185 [2024-07-24 19:28:24.356805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.185 [2024-07-24 19:28:24.359330] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.185 [2024-07-24 19:28:24.368762] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.185 [2024-07-24 19:28:24.369256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.185 [2024-07-24 19:28:24.369274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.185 [2024-07-24 19:28:24.369283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.185 [2024-07-24 19:28:24.369439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.185 [2024-07-24 19:28:24.369597] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.185 [2024-07-24 19:28:24.369607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.186 [2024-07-24 19:28:24.369615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.186 [2024-07-24 19:28:24.372139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.186 [2024-07-24 19:28:24.381501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.186 [2024-07-24 19:28:24.382016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.186 [2024-07-24 19:28:24.382073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.186 [2024-07-24 19:28:24.382105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.186 [2024-07-24 19:28:24.382697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.186 [2024-07-24 19:28:24.382908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.186 [2024-07-24 19:28:24.382920] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.186 [2024-07-24 19:28:24.382929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.186 [2024-07-24 19:28:24.385444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.186 [2024-07-24 19:28:24.394145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.186 [2024-07-24 19:28:24.394577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.186 [2024-07-24 19:28:24.394595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.186 [2024-07-24 19:28:24.394604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.186 [2024-07-24 19:28:24.394783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.186 [2024-07-24 19:28:24.394950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.186 [2024-07-24 19:28:24.394962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.186 [2024-07-24 19:28:24.394971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.186 [2024-07-24 19:28:24.397486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.186 [2024-07-24 19:28:24.406971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.186 [2024-07-24 19:28:24.407386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.186 [2024-07-24 19:28:24.407437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.186 [2024-07-24 19:28:24.407469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.186 [2024-07-24 19:28:24.408076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.186 [2024-07-24 19:28:24.408362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.186 [2024-07-24 19:28:24.408374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.186 [2024-07-24 19:28:24.408383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.186 [2024-07-24 19:28:24.410928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.186 [2024-07-24 19:28:24.419813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.186 [2024-07-24 19:28:24.420321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.186 [2024-07-24 19:28:24.420373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.186 [2024-07-24 19:28:24.420406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.186 [2024-07-24 19:28:24.420867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.186 [2024-07-24 19:28:24.421034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.186 [2024-07-24 19:28:24.421045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.186 [2024-07-24 19:28:24.421054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.446 [2024-07-24 19:28:24.423647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.446 [2024-07-24 19:28:24.432533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.446 [2024-07-24 19:28:24.433023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.446 [2024-07-24 19:28:24.433077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.446 [2024-07-24 19:28:24.433117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.446 [2024-07-24 19:28:24.433707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.446 [2024-07-24 19:28:24.434171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.446 [2024-07-24 19:28:24.434182] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.446 [2024-07-24 19:28:24.434191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.446 [2024-07-24 19:28:24.436647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.446 [2024-07-24 19:28:24.445256] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.446 [2024-07-24 19:28:24.445750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.446 [2024-07-24 19:28:24.445769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.446 [2024-07-24 19:28:24.445779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.446 [2024-07-24 19:28:24.445951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.446 [2024-07-24 19:28:24.446108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.446 [2024-07-24 19:28:24.446119] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.446 [2024-07-24 19:28:24.446129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.446 [2024-07-24 19:28:24.448588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.446 [2024-07-24 19:28:24.457987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.446 [2024-07-24 19:28:24.458502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.446 [2024-07-24 19:28:24.458520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.446 [2024-07-24 19:28:24.458529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.446 [2024-07-24 19:28:24.458686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.446 [2024-07-24 19:28:24.458848] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.446 [2024-07-24 19:28:24.458860] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.446 [2024-07-24 19:28:24.458868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.446 [2024-07-24 19:28:24.461372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.446 [2024-07-24 19:28:24.470724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.446 [2024-07-24 19:28:24.471210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.446 [2024-07-24 19:28:24.471261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.446 [2024-07-24 19:28:24.471294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.446 [2024-07-24 19:28:24.471904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.446 [2024-07-24 19:28:24.472422] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.446 [2024-07-24 19:28:24.472439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.446 [2024-07-24 19:28:24.472449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.446 [2024-07-24 19:28:24.475978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.446 [2024-07-24 19:28:24.484164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.446 [2024-07-24 19:28:24.484659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.446 [2024-07-24 19:28:24.484677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.446 [2024-07-24 19:28:24.484687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.446 [2024-07-24 19:28:24.484878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.446 [2024-07-24 19:28:24.485049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.446 [2024-07-24 19:28:24.485061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.446 [2024-07-24 19:28:24.485071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.446 [2024-07-24 19:28:24.487743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.446 [2024-07-24 19:28:24.497009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.446 [2024-07-24 19:28:24.497512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.446 [2024-07-24 19:28:24.497530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.446 [2024-07-24 19:28:24.497540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.446 [2024-07-24 19:28:24.497710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.446 [2024-07-24 19:28:24.497895] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.446 [2024-07-24 19:28:24.497907] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.446 [2024-07-24 19:28:24.497915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.447 [2024-07-24 19:28:24.500700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.447 [2024-07-24 19:28:24.509899] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.447 [2024-07-24 19:28:24.510381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.447 [2024-07-24 19:28:24.510399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.447 [2024-07-24 19:28:24.510410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.447 [2024-07-24 19:28:24.510575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.447 [2024-07-24 19:28:24.510747] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.447 [2024-07-24 19:28:24.510759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.447 [2024-07-24 19:28:24.510768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.447 [2024-07-24 19:28:24.513363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.447 [2024-07-24 19:28:24.522653] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.447 [2024-07-24 19:28:24.523085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.447 [2024-07-24 19:28:24.523137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.447 [2024-07-24 19:28:24.523170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.447 [2024-07-24 19:28:24.523594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.447 [2024-07-24 19:28:24.523758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.447 [2024-07-24 19:28:24.523769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.447 [2024-07-24 19:28:24.523778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.447 [2024-07-24 19:28:24.526237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.447 [2024-07-24 19:28:24.535377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.447 [2024-07-24 19:28:24.535780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.447 [2024-07-24 19:28:24.535798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.447 [2024-07-24 19:28:24.535807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.447 [2024-07-24 19:28:24.535964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.447 [2024-07-24 19:28:24.536121] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.447 [2024-07-24 19:28:24.536132] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.447 [2024-07-24 19:28:24.536140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.447 [2024-07-24 19:28:24.538684] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.447 [2024-07-24 19:28:24.548053] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.447 [2024-07-24 19:28:24.548482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.447 [2024-07-24 19:28:24.548534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.447 [2024-07-24 19:28:24.548567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.447 [2024-07-24 19:28:24.549187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.447 [2024-07-24 19:28:24.549571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.447 [2024-07-24 19:28:24.549583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.447 [2024-07-24 19:28:24.549592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.447 [2024-07-24 19:28:24.552077] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.447 [2024-07-24 19:28:24.560784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.447 [2024-07-24 19:28:24.561266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.447 [2024-07-24 19:28:24.561284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.447 [2024-07-24 19:28:24.561293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.447 [2024-07-24 19:28:24.561453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.447 [2024-07-24 19:28:24.561610] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.447 [2024-07-24 19:28:24.561621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.447 [2024-07-24 19:28:24.561630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.447 [2024-07-24 19:28:24.564177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.447 [2024-07-24 19:28:24.573558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.447 [2024-07-24 19:28:24.574067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.447 [2024-07-24 19:28:24.574085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.447 [2024-07-24 19:28:24.574095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.447 [2024-07-24 19:28:24.574252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.447 [2024-07-24 19:28:24.574408] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.447 [2024-07-24 19:28:24.574419] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.447 [2024-07-24 19:28:24.574427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.447 [2024-07-24 19:28:24.576974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.447 [2024-07-24 19:28:24.586257] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.447 [2024-07-24 19:28:24.586753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.447 [2024-07-24 19:28:24.586804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.447 [2024-07-24 19:28:24.586837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.447 [2024-07-24 19:28:24.587282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.447 [2024-07-24 19:28:24.587440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.447 [2024-07-24 19:28:24.587451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.447 [2024-07-24 19:28:24.587461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.447 [2024-07-24 19:28:24.590010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.447 [2024-07-24 19:28:24.599004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.447 [2024-07-24 19:28:24.599498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.447 [2024-07-24 19:28:24.599551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.447 [2024-07-24 19:28:24.599584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.447 [2024-07-24 19:28:24.600191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.447 [2024-07-24 19:28:24.600624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.447 [2024-07-24 19:28:24.600635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.447 [2024-07-24 19:28:24.600648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.447 [2024-07-24 19:28:24.603129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.447 [2024-07-24 19:28:24.611772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.447 [2024-07-24 19:28:24.612245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.447 [2024-07-24 19:28:24.612296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.447 [2024-07-24 19:28:24.612329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.447 [2024-07-24 19:28:24.612787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.447 [2024-07-24 19:28:24.612945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.447 [2024-07-24 19:28:24.612956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.447 [2024-07-24 19:28:24.612965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.447 [2024-07-24 19:28:24.615423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.447 [2024-07-24 19:28:24.624415] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.448 [2024-07-24 19:28:24.624778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.448 [2024-07-24 19:28:24.624796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.448 [2024-07-24 19:28:24.624805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.448 [2024-07-24 19:28:24.624961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.448 [2024-07-24 19:28:24.625118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.448 [2024-07-24 19:28:24.625130] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.448 [2024-07-24 19:28:24.625139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.448 [2024-07-24 19:28:24.627683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.448 [2024-07-24 19:28:24.637101] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.448 [2024-07-24 19:28:24.637598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.448 [2024-07-24 19:28:24.637648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.448 [2024-07-24 19:28:24.637680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.448 [2024-07-24 19:28:24.638340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.448 [2024-07-24 19:28:24.638679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.448 [2024-07-24 19:28:24.638690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.448 [2024-07-24 19:28:24.638700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.448 [2024-07-24 19:28:24.641232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.448 [2024-07-24 19:28:24.649798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.448 [2024-07-24 19:28:24.650207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.448 [2024-07-24 19:28:24.650225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.448 [2024-07-24 19:28:24.650234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.448 [2024-07-24 19:28:24.650391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.448 [2024-07-24 19:28:24.650548] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.448 [2024-07-24 19:28:24.650559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.448 [2024-07-24 19:28:24.650567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.448 [2024-07-24 19:28:24.653115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.448 [2024-07-24 19:28:24.662478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.448 [2024-07-24 19:28:24.662988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.448 [2024-07-24 19:28:24.663041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.448 [2024-07-24 19:28:24.663073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.448 [2024-07-24 19:28:24.663522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.448 [2024-07-24 19:28:24.663681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.448 [2024-07-24 19:28:24.663692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.448 [2024-07-24 19:28:24.663700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.448 [2024-07-24 19:28:24.666246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.448 [2024-07-24 19:28:24.675183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.448 [2024-07-24 19:28:24.675671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.448 [2024-07-24 19:28:24.675688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.448 [2024-07-24 19:28:24.675697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.448 [2024-07-24 19:28:24.675882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.448 [2024-07-24 19:28:24.676048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.448 [2024-07-24 19:28:24.676059] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.448 [2024-07-24 19:28:24.676069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.448 [2024-07-24 19:28:24.678580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.708 [2024-07-24 19:28:24.687973] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.708 [2024-07-24 19:28:24.688449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.708 [2024-07-24 19:28:24.688467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.708 [2024-07-24 19:28:24.688477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.708 [2024-07-24 19:28:24.688646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.708 [2024-07-24 19:28:24.688817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.708 [2024-07-24 19:28:24.688829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.708 [2024-07-24 19:28:24.688839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.708 [2024-07-24 19:28:24.691439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.708 [2024-07-24 19:28:24.700664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.708 [2024-07-24 19:28:24.701168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.708 [2024-07-24 19:28:24.701220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.708 [2024-07-24 19:28:24.701253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.708 [2024-07-24 19:28:24.701663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.708 [2024-07-24 19:28:24.701847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.708 [2024-07-24 19:28:24.701857] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.708 [2024-07-24 19:28:24.701866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.708 [2024-07-24 19:28:24.704386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.708 [2024-07-24 19:28:24.713579] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.708 [2024-07-24 19:28:24.714088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.708 [2024-07-24 19:28:24.714106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.708 [2024-07-24 19:28:24.714116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.708 [2024-07-24 19:28:24.714287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.709 [2024-07-24 19:28:24.714457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.709 [2024-07-24 19:28:24.714469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.709 [2024-07-24 19:28:24.714479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.709 [2024-07-24 19:28:24.717151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.709 [2024-07-24 19:28:24.726485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.709 [2024-07-24 19:28:24.726989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.709 [2024-07-24 19:28:24.727041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.709 [2024-07-24 19:28:24.727074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.709 [2024-07-24 19:28:24.727665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.709 [2024-07-24 19:28:24.728142] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.709 [2024-07-24 19:28:24.728153] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.709 [2024-07-24 19:28:24.728164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.709 [2024-07-24 19:28:24.730764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.709 [2024-07-24 19:28:24.739152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.709 [2024-07-24 19:28:24.739644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.709 [2024-07-24 19:28:24.739661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.709 [2024-07-24 19:28:24.739671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.709 [2024-07-24 19:28:24.739861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.709 [2024-07-24 19:28:24.740032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.709 [2024-07-24 19:28:24.740043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.709 [2024-07-24 19:28:24.740052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.709 [2024-07-24 19:28:24.742709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.709 [2024-07-24 19:28:24.751803] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.709 [2024-07-24 19:28:24.752307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.709 [2024-07-24 19:28:24.752357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.709 [2024-07-24 19:28:24.752389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.709 [2024-07-24 19:28:24.752789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.709 [2024-07-24 19:28:24.752956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.709 [2024-07-24 19:28:24.752968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.709 [2024-07-24 19:28:24.752977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.709 [2024-07-24 19:28:24.755571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.709 [2024-07-24 19:28:24.764550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.709 [2024-07-24 19:28:24.765032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.709 [2024-07-24 19:28:24.765083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.709 [2024-07-24 19:28:24.765117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.709 [2024-07-24 19:28:24.765705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.709 [2024-07-24 19:28:24.765933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.709 [2024-07-24 19:28:24.765944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.709 [2024-07-24 19:28:24.765954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.709 [2024-07-24 19:28:24.768471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.709 [2024-07-24 19:28:24.777310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.709 [2024-07-24 19:28:24.777827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.709 [2024-07-24 19:28:24.777885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.709 [2024-07-24 19:28:24.777918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.709 [2024-07-24 19:28:24.778246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.709 [2024-07-24 19:28:24.778404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.709 [2024-07-24 19:28:24.778415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.709 [2024-07-24 19:28:24.778424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.709 [2024-07-24 19:28:24.781020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.709 [2024-07-24 19:28:24.790004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.709 [2024-07-24 19:28:24.790494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.709 [2024-07-24 19:28:24.790511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.709 [2024-07-24 19:28:24.790521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.709 [2024-07-24 19:28:24.790677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.709 [2024-07-24 19:28:24.790860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.709 [2024-07-24 19:28:24.790872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.709 [2024-07-24 19:28:24.790880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.709 [2024-07-24 19:28:24.793403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.709 [2024-07-24 19:28:24.802710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.709 [2024-07-24 19:28:24.803128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.709 [2024-07-24 19:28:24.803179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.709 [2024-07-24 19:28:24.803211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.709 [2024-07-24 19:28:24.803815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.709 [2024-07-24 19:28:24.804266] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.709 [2024-07-24 19:28:24.804277] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.709 [2024-07-24 19:28:24.804286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.709 [2024-07-24 19:28:24.806786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.709 [2024-07-24 19:28:24.815426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.709 [2024-07-24 19:28:24.815894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.709 [2024-07-24 19:28:24.815912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.709 [2024-07-24 19:28:24.815921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.709 [2024-07-24 19:28:24.816078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.709 [2024-07-24 19:28:24.816238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.709 [2024-07-24 19:28:24.816250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.709 [2024-07-24 19:28:24.816258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.709 [2024-07-24 19:28:24.818807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.709 [2024-07-24 19:28:24.828173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.709 [2024-07-24 19:28:24.828661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.709 [2024-07-24 19:28:24.828712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.709 [2024-07-24 19:28:24.828760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.709 [2024-07-24 19:28:24.829210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.709 [2024-07-24 19:28:24.829377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.709 [2024-07-24 19:28:24.829388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.709 [2024-07-24 19:28:24.829397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.709 [2024-07-24 19:28:24.831894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.709 [2024-07-24 19:28:24.840913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.710 [2024-07-24 19:28:24.841419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.710 [2024-07-24 19:28:24.841471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.710 [2024-07-24 19:28:24.841504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.710 [2024-07-24 19:28:24.842108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.710 [2024-07-24 19:28:24.842443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.710 [2024-07-24 19:28:24.842454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.710 [2024-07-24 19:28:24.842464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.710 [2024-07-24 19:28:24.844955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.710 [2024-07-24 19:28:24.853682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.710 [2024-07-24 19:28:24.854120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.710 [2024-07-24 19:28:24.854172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.710 [2024-07-24 19:28:24.854206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.710 [2024-07-24 19:28:24.854870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.710 [2024-07-24 19:28:24.855262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.710 [2024-07-24 19:28:24.855273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.710 [2024-07-24 19:28:24.855282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.710 [2024-07-24 19:28:24.857745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.710 [2024-07-24 19:28:24.866450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.710 [2024-07-24 19:28:24.866921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.710 [2024-07-24 19:28:24.866940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.710 [2024-07-24 19:28:24.866949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.710 [2024-07-24 19:28:24.867106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.710 [2024-07-24 19:28:24.867263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.710 [2024-07-24 19:28:24.867273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.710 [2024-07-24 19:28:24.867282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.710 [2024-07-24 19:28:24.869835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.710 [2024-07-24 19:28:24.879204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.710 [2024-07-24 19:28:24.879636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.710 [2024-07-24 19:28:24.879687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.710 [2024-07-24 19:28:24.879737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.710 [2024-07-24 19:28:24.880133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.710 [2024-07-24 19:28:24.880299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.710 [2024-07-24 19:28:24.880310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.710 [2024-07-24 19:28:24.880319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.710 [2024-07-24 19:28:24.882818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.710 [2024-07-24 19:28:24.891871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.710 [2024-07-24 19:28:24.892380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.710 [2024-07-24 19:28:24.892431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.710 [2024-07-24 19:28:24.892464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.710 [2024-07-24 19:28:24.893069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.710 [2024-07-24 19:28:24.893486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.710 [2024-07-24 19:28:24.893497] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.710 [2024-07-24 19:28:24.893507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.710 [2024-07-24 19:28:24.896046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.710 [2024-07-24 19:28:24.904535] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.710 [2024-07-24 19:28:24.905050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.710 [2024-07-24 19:28:24.905103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.710 [2024-07-24 19:28:24.905142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.710 [2024-07-24 19:28:24.905555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.710 [2024-07-24 19:28:24.905719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.710 [2024-07-24 19:28:24.905731] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.710 [2024-07-24 19:28:24.905740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.710 [2024-07-24 19:28:24.908280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.710 [2024-07-24 19:28:24.917271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.710 [2024-07-24 19:28:24.917742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.710 [2024-07-24 19:28:24.917787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.710 [2024-07-24 19:28:24.917820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.710 [2024-07-24 19:28:24.918366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.710 [2024-07-24 19:28:24.918524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.710 [2024-07-24 19:28:24.918535] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.710 [2024-07-24 19:28:24.918544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.710 [2024-07-24 19:28:24.921092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.710 [2024-07-24 19:28:24.929944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.710 [2024-07-24 19:28:24.930449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.710 [2024-07-24 19:28:24.930500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.710 [2024-07-24 19:28:24.930533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.710 [2024-07-24 19:28:24.931137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.710 [2024-07-24 19:28:24.931584] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.710 [2024-07-24 19:28:24.931595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.710 [2024-07-24 19:28:24.931605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.710 [2024-07-24 19:28:24.934090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.710 [2024-07-24 19:28:24.942734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.710 [2024-07-24 19:28:24.943265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.710 [2024-07-24 19:28:24.943316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.710 [2024-07-24 19:28:24.943349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.710 [2024-07-24 19:28:24.943675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.710 [2024-07-24 19:28:24.943850] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.710 [2024-07-24 19:28:24.943862] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.711 [2024-07-24 19:28:24.943874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.971 [2024-07-24 19:28:24.946467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.971 [2024-07-24 19:28:24.955462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.971 [2024-07-24 19:28:24.955964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.971 [2024-07-24 19:28:24.956016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.971 [2024-07-24 19:28:24.956049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.971 [2024-07-24 19:28:24.956473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.971 [2024-07-24 19:28:24.956631] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.971 [2024-07-24 19:28:24.956641] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.971 [2024-07-24 19:28:24.956650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.971 [2024-07-24 19:28:24.959197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.971 [2024-07-24 19:28:24.968187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.971 [2024-07-24 19:28:24.968673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.971 [2024-07-24 19:28:24.968736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.971 [2024-07-24 19:28:24.968769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.971 [2024-07-24 19:28:24.969253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.971 [2024-07-24 19:28:24.969416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.971 [2024-07-24 19:28:24.969426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.971 [2024-07-24 19:28:24.969436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.971 [2024-07-24 19:28:24.971985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.971 [2024-07-24 19:28:24.980922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.971 [2024-07-24 19:28:24.981406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.971 [2024-07-24 19:28:24.981423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.971 [2024-07-24 19:28:24.981433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.972 [2024-07-24 19:28:24.981591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.972 [2024-07-24 19:28:24.981754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.972 [2024-07-24 19:28:24.981766] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.972 [2024-07-24 19:28:24.981777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.972 [2024-07-24 19:28:24.984325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.972 [2024-07-24 19:28:24.993640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.972 [2024-07-24 19:28:24.994040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.972 [2024-07-24 19:28:24.994058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:38.972 [2024-07-24 19:28:24.994068] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:38.972 [2024-07-24 19:28:24.994234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:38.972 [2024-07-24 19:28:24.994399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.972 [2024-07-24 19:28:24.994411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.972 [2024-07-24 19:28:24.994420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.972 [2024-07-24 19:28:24.997102] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.972 [2024-07-24 19:28:25.006517] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.972 [2024-07-24 19:28:25.006937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.972 [2024-07-24 19:28:25.006955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.972 [2024-07-24 19:28:25.006965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.972 [2024-07-24 19:28:25.007130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.972 [2024-07-24 19:28:25.007297] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.972 [2024-07-24 19:28:25.007308] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.972 [2024-07-24 19:28:25.007318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.972 [2024-07-24 19:28:25.009947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.972 [2024-07-24 19:28:25.019445] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.972 [2024-07-24 19:28:25.019851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.972 [2024-07-24 19:28:25.019869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.972 [2024-07-24 19:28:25.019879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.972 [2024-07-24 19:28:25.020044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.972 [2024-07-24 19:28:25.020210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.972 [2024-07-24 19:28:25.020221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.972 [2024-07-24 19:28:25.020230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.972 [2024-07-24 19:28:25.022839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.972 [2024-07-24 19:28:25.032208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.972 [2024-07-24 19:28:25.032679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.972 [2024-07-24 19:28:25.032697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.972 [2024-07-24 19:28:25.032706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.972 [2024-07-24 19:28:25.032894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.972 [2024-07-24 19:28:25.033066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.972 [2024-07-24 19:28:25.033077] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.972 [2024-07-24 19:28:25.033087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.972 [2024-07-24 19:28:25.035595] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.972 [2024-07-24 19:28:25.044966] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.972 [2024-07-24 19:28:25.045459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.972 [2024-07-24 19:28:25.045476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.972 [2024-07-24 19:28:25.045485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.972 [2024-07-24 19:28:25.045642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.972 [2024-07-24 19:28:25.045824] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.972 [2024-07-24 19:28:25.045836] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.972 [2024-07-24 19:28:25.045844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.972 [2024-07-24 19:28:25.048364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.972 [2024-07-24 19:28:25.057705] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.972 [2024-07-24 19:28:25.058117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.972 [2024-07-24 19:28:25.058169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.972 [2024-07-24 19:28:25.058202] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.972 [2024-07-24 19:28:25.058705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.972 [2024-07-24 19:28:25.058889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.972 [2024-07-24 19:28:25.058900] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.972 [2024-07-24 19:28:25.058910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.972 [2024-07-24 19:28:25.061427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.972 [2024-07-24 19:28:25.070367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.972 [2024-07-24 19:28:25.070869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.972 [2024-07-24 19:28:25.070921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.972 [2024-07-24 19:28:25.070954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.972 [2024-07-24 19:28:25.071546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.972 [2024-07-24 19:28:25.071869] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.972 [2024-07-24 19:28:25.071881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.972 [2024-07-24 19:28:25.071890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.972 [2024-07-24 19:28:25.074412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.972 [2024-07-24 19:28:25.083056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.972 [2024-07-24 19:28:25.083391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.972 [2024-07-24 19:28:25.083408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.972 [2024-07-24 19:28:25.083417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.972 [2024-07-24 19:28:25.083573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.972 [2024-07-24 19:28:25.083734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.972 [2024-07-24 19:28:25.083745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.972 [2024-07-24 19:28:25.083754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.972 [2024-07-24 19:28:25.086296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.972 [2024-07-24 19:28:25.095756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.972 [2024-07-24 19:28:25.096249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.972 [2024-07-24 19:28:25.096267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.972 [2024-07-24 19:28:25.096276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.972 [2024-07-24 19:28:25.096434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.972 [2024-07-24 19:28:25.096591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.972 [2024-07-24 19:28:25.096601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.972 [2024-07-24 19:28:25.096610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.973 [2024-07-24 19:28:25.099156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.973 [2024-07-24 19:28:25.108440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.973 [2024-07-24 19:28:25.108945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.973 [2024-07-24 19:28:25.108998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.973 [2024-07-24 19:28:25.109030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.973 [2024-07-24 19:28:25.109622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.973 [2024-07-24 19:28:25.110045] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.973 [2024-07-24 19:28:25.110057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.973 [2024-07-24 19:28:25.110066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.973 [2024-07-24 19:28:25.112576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.973 [2024-07-24 19:28:25.121218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.973 [2024-07-24 19:28:25.121707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.973 [2024-07-24 19:28:25.121735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.973 [2024-07-24 19:28:25.121745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.973 [2024-07-24 19:28:25.121901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.973 [2024-07-24 19:28:25.122059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.973 [2024-07-24 19:28:25.122070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.973 [2024-07-24 19:28:25.122078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.973 [2024-07-24 19:28:25.124537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.973 [2024-07-24 19:28:25.133906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.973 [2024-07-24 19:28:25.134394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.973 [2024-07-24 19:28:25.134412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.973 [2024-07-24 19:28:25.134421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.973 [2024-07-24 19:28:25.134578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.973 [2024-07-24 19:28:25.134740] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.973 [2024-07-24 19:28:25.134767] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.973 [2024-07-24 19:28:25.134776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.973 [2024-07-24 19:28:25.137245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.973 [2024-07-24 19:28:25.146672] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.973 [2024-07-24 19:28:25.147164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.973 [2024-07-24 19:28:25.147181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.973 [2024-07-24 19:28:25.147191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.973 [2024-07-24 19:28:25.147348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.973 [2024-07-24 19:28:25.147504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.973 [2024-07-24 19:28:25.147515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.973 [2024-07-24 19:28:25.147523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.973 [2024-07-24 19:28:25.150076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.973 [2024-07-24 19:28:25.159443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.973 [2024-07-24 19:28:25.159927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.973 [2024-07-24 19:28:25.159944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.973 [2024-07-24 19:28:25.159953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.973 [2024-07-24 19:28:25.160111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.973 [2024-07-24 19:28:25.160271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.973 [2024-07-24 19:28:25.160282] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.973 [2024-07-24 19:28:25.160290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.973 [2024-07-24 19:28:25.162836] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.973 [2024-07-24 19:28:25.172204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.973 [2024-07-24 19:28:25.172706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.973 [2024-07-24 19:28:25.172770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.973 [2024-07-24 19:28:25.172803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.973 [2024-07-24 19:28:25.173184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.973 [2024-07-24 19:28:25.173342] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.973 [2024-07-24 19:28:25.173353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.973 [2024-07-24 19:28:25.173362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.973 [2024-07-24 19:28:25.175907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.973 [2024-07-24 19:28:25.184900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.973 [2024-07-24 19:28:25.185404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.973 [2024-07-24 19:28:25.185455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.973 [2024-07-24 19:28:25.185488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.973 [2024-07-24 19:28:25.185964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.973 [2024-07-24 19:28:25.186124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.973 [2024-07-24 19:28:25.186135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.973 [2024-07-24 19:28:25.186144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.973 [2024-07-24 19:28:25.188601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.973 [2024-07-24 19:28:25.197680] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.973 [2024-07-24 19:28:25.198190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.973 [2024-07-24 19:28:25.198242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:38.973 [2024-07-24 19:28:25.198274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:38.973 [2024-07-24 19:28:25.198881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:38.973 [2024-07-24 19:28:25.199263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.973 [2024-07-24 19:28:25.199274] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.973 [2024-07-24 19:28:25.199283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.973 [2024-07-24 19:28:25.201782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.234 [2024-07-24 19:28:25.210479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.234 [2024-07-24 19:28:25.210906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.234 [2024-07-24 19:28:25.210970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:39.234 [2024-07-24 19:28:25.211003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:39.234 [2024-07-24 19:28:25.211594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:39.234 [2024-07-24 19:28:25.212031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.234 [2024-07-24 19:28:25.212042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.234 [2024-07-24 19:28:25.212050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.234 [2024-07-24 19:28:25.214572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.234 [2024-07-24 19:28:25.223188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.234 [2024-07-24 19:28:25.223672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.234 [2024-07-24 19:28:25.223740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:39.234 [2024-07-24 19:28:25.223773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:39.234 [2024-07-24 19:28:25.224174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:39.234 [2024-07-24 19:28:25.224333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.234 [2024-07-24 19:28:25.224344] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.234 [2024-07-24 19:28:25.224352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.234 [2024-07-24 19:28:25.226834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.234 [2024-07-24 19:28:25.235972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.234 [2024-07-24 19:28:25.236460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.234 [2024-07-24 19:28:25.236478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:39.234 [2024-07-24 19:28:25.236487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:39.234 [2024-07-24 19:28:25.236644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:39.234 [2024-07-24 19:28:25.236806] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.234 [2024-07-24 19:28:25.236818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.234 [2024-07-24 19:28:25.236826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.234 [2024-07-24 19:28:25.239329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.234 [2024-07-24 19:28:25.248885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.234 [2024-07-24 19:28:25.249263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.234 [2024-07-24 19:28:25.249281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:39.234 [2024-07-24 19:28:25.249293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:39.234 [2024-07-24 19:28:25.249459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:39.234 [2024-07-24 19:28:25.249625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.234 [2024-07-24 19:28:25.249637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.234 [2024-07-24 19:28:25.249647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.234 [2024-07-24 19:28:25.252317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.234 [2024-07-24 19:28:25.261774] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.234 [2024-07-24 19:28:25.262174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.234 [2024-07-24 19:28:25.262191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:39.234 [2024-07-24 19:28:25.262201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:39.234 [2024-07-24 19:28:25.262367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:39.234 [2024-07-24 19:28:25.262532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.235 [2024-07-24 19:28:25.262543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.235 [2024-07-24 19:28:25.262552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.235 [2024-07-24 19:28:25.265269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.235 [2024-07-24 19:28:25.274630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.235 [2024-07-24 19:28:25.275045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.235 [2024-07-24 19:28:25.275065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:39.235 [2024-07-24 19:28:25.275075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:39.235 [2024-07-24 19:28:25.275241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:39.235 [2024-07-24 19:28:25.275407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.235 [2024-07-24 19:28:25.275420] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.235 [2024-07-24 19:28:25.275431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.235 [2024-07-24 19:28:25.278035] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.235 [2024-07-24 19:28:25.287548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.235 [2024-07-24 19:28:25.287974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.235 [2024-07-24 19:28:25.287992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:39.235 [2024-07-24 19:28:25.288003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:39.235 [2024-07-24 19:28:25.288173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:39.235 [2024-07-24 19:28:25.288343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.235 [2024-07-24 19:28:25.288355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.235 [2024-07-24 19:28:25.288367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.235 [2024-07-24 19:28:25.291053] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.235 [2024-07-24 19:28:25.300455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.235 [2024-07-24 19:28:25.300864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.235 [2024-07-24 19:28:25.300883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:39.235 [2024-07-24 19:28:25.300894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:39.235 [2024-07-24 19:28:25.301059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:39.235 [2024-07-24 19:28:25.301225] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.235 [2024-07-24 19:28:25.301237] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.235 [2024-07-24 19:28:25.301246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.235 [2024-07-24 19:28:25.303825] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.235 [2024-07-24 19:28:25.313327] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.235 [2024-07-24 19:28:25.313703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.235 [2024-07-24 19:28:25.313769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:39.235 [2024-07-24 19:28:25.313802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:39.235 [2024-07-24 19:28:25.314359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:39.235 [2024-07-24 19:28:25.314527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.235 [2024-07-24 19:28:25.314538] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.235 [2024-07-24 19:28:25.314549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.235 [2024-07-24 19:28:25.317201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.235 [2024-07-24 19:28:25.326345] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.235 [2024-07-24 19:28:25.326777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.235 [2024-07-24 19:28:25.326796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:39.235 [2024-07-24 19:28:25.326806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:39.235 [2024-07-24 19:28:25.326977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:39.235 [2024-07-24 19:28:25.327148] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.235 [2024-07-24 19:28:25.327160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.235 [2024-07-24 19:28:25.327169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.235 [2024-07-24 19:28:25.329845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.235 [2024-07-24 19:28:25.339302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.235 [2024-07-24 19:28:25.339665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.235 [2024-07-24 19:28:25.339683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:39.235 [2024-07-24 19:28:25.339693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:39.235 [2024-07-24 19:28:25.339869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:39.235 [2024-07-24 19:28:25.340040] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.235 [2024-07-24 19:28:25.340052] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.235 [2024-07-24 19:28:25.340061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.235 [2024-07-24 19:28:25.342732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.235 [2024-07-24 19:28:25.352197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.235 [2024-07-24 19:28:25.352685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.235 [2024-07-24 19:28:25.352703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:39.235 [2024-07-24 19:28:25.352719] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:39.235 [2024-07-24 19:28:25.352890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:39.235 [2024-07-24 19:28:25.353061] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.235 [2024-07-24 19:28:25.353073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.235 [2024-07-24 19:28:25.353083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.235 [2024-07-24 19:28:25.355760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.235 [2024-07-24 19:28:25.365216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.235 [2024-07-24 19:28:25.365727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.235 [2024-07-24 19:28:25.365745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:39.235 [2024-07-24 19:28:25.365755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:39.235 [2024-07-24 19:28:25.365925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:39.235 [2024-07-24 19:28:25.366096] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.235 [2024-07-24 19:28:25.366107] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.235 [2024-07-24 19:28:25.366116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.235 [2024-07-24 19:28:25.368801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.235 [2024-07-24 19:28:25.378120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.235 [2024-07-24 19:28:25.378608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.235 [2024-07-24 19:28:25.378626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:39.235 [2024-07-24 19:28:25.378636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:39.235 [2024-07-24 19:28:25.378816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:39.235 [2024-07-24 19:28:25.378987] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.235 [2024-07-24 19:28:25.378999] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.235 [2024-07-24 19:28:25.379008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.235 [2024-07-24 19:28:25.381675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.235 [2024-07-24 19:28:25.390994] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.235 [2024-07-24 19:28:25.391499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.236 [2024-07-24 19:28:25.391517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:39.236 [2024-07-24 19:28:25.391527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:39.236 [2024-07-24 19:28:25.391697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:39.236 [2024-07-24 19:28:25.391876] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.236 [2024-07-24 19:28:25.391888] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.236 [2024-07-24 19:28:25.391897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.236 [2024-07-24 19:28:25.394567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.236 [2024-07-24 19:28:25.403904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.236 [2024-07-24 19:28:25.404347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.236 [2024-07-24 19:28:25.404366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:39.236 [2024-07-24 19:28:25.404376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:39.236 [2024-07-24 19:28:25.404546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:39.236 [2024-07-24 19:28:25.404723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.236 [2024-07-24 19:28:25.404734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.236 [2024-07-24 19:28:25.404744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.236 [2024-07-24 19:28:25.407413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.236 [2024-07-24 19:28:25.416869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.236 [2024-07-24 19:28:25.417278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.236 [2024-07-24 19:28:25.417296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:39.236 [2024-07-24 19:28:25.417307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:39.236 [2024-07-24 19:28:25.417476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:39.236 [2024-07-24 19:28:25.417647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.236 [2024-07-24 19:28:25.417658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.236 [2024-07-24 19:28:25.417671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.236 [2024-07-24 19:28:25.420347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.236 [2024-07-24 19:28:25.429807] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.236 [2024-07-24 19:28:25.430297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.236 [2024-07-24 19:28:25.430315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:39.236 [2024-07-24 19:28:25.430325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:39.236 [2024-07-24 19:28:25.430495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:39.236 [2024-07-24 19:28:25.430665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.236 [2024-07-24 19:28:25.430677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.236 [2024-07-24 19:28:25.430686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.236 [2024-07-24 19:28:25.433360] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.236 [2024-07-24 19:28:25.442817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.236 [2024-07-24 19:28:25.443198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.236 [2024-07-24 19:28:25.443216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:39.236 [2024-07-24 19:28:25.443226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:39.236 [2024-07-24 19:28:25.443395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:39.236 [2024-07-24 19:28:25.443565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.236 [2024-07-24 19:28:25.443577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.236 [2024-07-24 19:28:25.443586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.236 [2024-07-24 19:28:25.446259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.236 [2024-07-24 19:28:25.455723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.236 [2024-07-24 19:28:25.456209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.236 [2024-07-24 19:28:25.456228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:39.236 [2024-07-24 19:28:25.456238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:39.236 [2024-07-24 19:28:25.456408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:39.236 [2024-07-24 19:28:25.456579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.236 [2024-07-24 19:28:25.456590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.236 [2024-07-24 19:28:25.456601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.236 [2024-07-24 19:28:25.459275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.236 [2024-07-24 19:28:25.468731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.236 [2024-07-24 19:28:25.469258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.236 [2024-07-24 19:28:25.469279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:39.236 [2024-07-24 19:28:25.469289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:39.236 [2024-07-24 19:28:25.469459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:39.236 [2024-07-24 19:28:25.469629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.236 [2024-07-24 19:28:25.469640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.236 [2024-07-24 19:28:25.469650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.497 [2024-07-24 19:28:25.472353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.497 [2024-07-24 19:28:25.481654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.497 [2024-07-24 19:28:25.482159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.497 [2024-07-24 19:28:25.482178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:39.497 [2024-07-24 19:28:25.482188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:39.497 [2024-07-24 19:28:25.482358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:39.497 [2024-07-24 19:28:25.482528] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.497 [2024-07-24 19:28:25.482540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.497 [2024-07-24 19:28:25.482549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.497 [2024-07-24 19:28:25.485222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.497 [2024-07-24 19:28:25.494528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.497 [2024-07-24 19:28:25.494919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.497 [2024-07-24 19:28:25.494938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:39.497 [2024-07-24 19:28:25.494948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:39.497 [2024-07-24 19:28:25.495119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:39.497 [2024-07-24 19:28:25.495290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.497 [2024-07-24 19:28:25.495301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.497 [2024-07-24 19:28:25.495310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.497 [2024-07-24 19:28:25.497982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.497 [2024-07-24 19:28:25.507436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.497 [2024-07-24 19:28:25.507869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.497 [2024-07-24 19:28:25.507888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:39.497 [2024-07-24 19:28:25.507898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:39.497 [2024-07-24 19:28:25.508068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:39.497 [2024-07-24 19:28:25.508241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.497 [2024-07-24 19:28:25.508253] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.497 [2024-07-24 19:28:25.508262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.497 [2024-07-24 19:28:25.510934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.497 [2024-07-24 19:28:25.520396] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.497 [2024-07-24 19:28:25.520840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.497 [2024-07-24 19:28:25.520859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:39.497 [2024-07-24 19:28:25.520869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:39.497 [2024-07-24 19:28:25.521038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:39.497 [2024-07-24 19:28:25.521209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.498 [2024-07-24 19:28:25.521220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.498 [2024-07-24 19:28:25.521229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.498 [2024-07-24 19:28:25.523927] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.498 [2024-07-24 19:28:25.533383] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.498 [2024-07-24 19:28:25.533790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.498 [2024-07-24 19:28:25.533809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:39.498 [2024-07-24 19:28:25.533819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:39.498 [2024-07-24 19:28:25.533989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:39.498 [2024-07-24 19:28:25.534159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.498 [2024-07-24 19:28:25.534171] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.498 [2024-07-24 19:28:25.534181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.498 [2024-07-24 19:28:25.537050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.498 [2024-07-24 19:28:25.546356] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.498 [2024-07-24 19:28:25.546841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.498 [2024-07-24 19:28:25.546860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:39.498 [2024-07-24 19:28:25.546871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:39.498 [2024-07-24 19:28:25.547041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:39.498 [2024-07-24 19:28:25.547211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.498 [2024-07-24 19:28:25.547223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.498 [2024-07-24 19:28:25.547232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.498 [2024-07-24 19:28:25.549884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.498 [2024-07-24 19:28:25.559361] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.498 [2024-07-24 19:28:25.559871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.498 [2024-07-24 19:28:25.559890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:39.498 [2024-07-24 19:28:25.559901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:39.498 [2024-07-24 19:28:25.560071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:39.498 [2024-07-24 19:28:25.560241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.498 [2024-07-24 19:28:25.560253] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.498 [2024-07-24 19:28:25.560262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.498 [2024-07-24 19:28:25.562935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.498 [2024-07-24 19:28:25.572241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.498 [2024-07-24 19:28:25.572746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.498 [2024-07-24 19:28:25.572765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:39.498 [2024-07-24 19:28:25.572775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:39.498 [2024-07-24 19:28:25.572946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:39.498 [2024-07-24 19:28:25.573116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.498 [2024-07-24 19:28:25.573128] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.498 [2024-07-24 19:28:25.573138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.498 [2024-07-24 19:28:25.575811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.498 [2024-07-24 19:28:25.585265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.498 [2024-07-24 19:28:25.585629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.498 [2024-07-24 19:28:25.585648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:39.498 [2024-07-24 19:28:25.585658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:39.498 [2024-07-24 19:28:25.585835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:39.498 [2024-07-24 19:28:25.586006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.498 [2024-07-24 19:28:25.586017] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.498 [2024-07-24 19:28:25.586026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.498 [2024-07-24 19:28:25.588696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.498 [2024-07-24 19:28:25.598160] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.498 [2024-07-24 19:28:25.598661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.498 [2024-07-24 19:28:25.598680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:39.498 [2024-07-24 19:28:25.598693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:39.498 [2024-07-24 19:28:25.598870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:39.498 [2024-07-24 19:28:25.599040] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.498 [2024-07-24 19:28:25.599052] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.498 [2024-07-24 19:28:25.599061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.498 [2024-07-24 19:28:25.601731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.498 [2024-07-24 19:28:25.611032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.498 [2024-07-24 19:28:25.611536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.498 [2024-07-24 19:28:25.611555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:39.498 [2024-07-24 19:28:25.611565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:39.498 [2024-07-24 19:28:25.611738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:39.498 [2024-07-24 19:28:25.611908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.498 [2024-07-24 19:28:25.611920] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.498 [2024-07-24 19:28:25.611929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.498 [2024-07-24 19:28:25.614597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.498 [2024-07-24 19:28:25.624054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.498 [2024-07-24 19:28:25.624556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.498 [2024-07-24 19:28:25.624574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:39.498 [2024-07-24 19:28:25.624584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:39.498 [2024-07-24 19:28:25.624761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:39.498 [2024-07-24 19:28:25.624931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.498 [2024-07-24 19:28:25.624942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.498 [2024-07-24 19:28:25.624951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.498 [2024-07-24 19:28:25.627619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.498 [2024-07-24 19:28:25.637075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.498 [2024-07-24 19:28:25.637559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.498 [2024-07-24 19:28:25.637578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:39.498 [2024-07-24 19:28:25.637588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:39.498 [2024-07-24 19:28:25.637764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:39.498 [2024-07-24 19:28:25.637935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.498 [2024-07-24 19:28:25.637949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.498 [2024-07-24 19:28:25.637958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.498 [2024-07-24 19:28:25.640625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.499 [2024-07-24 19:28:25.650091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.499 [2024-07-24 19:28:25.650521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.499 [2024-07-24 19:28:25.650539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:39.499 [2024-07-24 19:28:25.650549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:39.499 [2024-07-24 19:28:25.650724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:39.499 [2024-07-24 19:28:25.650895] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.499 [2024-07-24 19:28:25.650906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.499 [2024-07-24 19:28:25.650915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.499 [2024-07-24 19:28:25.653582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.499 [2024-07-24 19:28:25.663037] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.499 [2024-07-24 19:28:25.663470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.499 [2024-07-24 19:28:25.663488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:39.499 [2024-07-24 19:28:25.663498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:39.499 [2024-07-24 19:28:25.663669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:39.499 [2024-07-24 19:28:25.663846] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.499 [2024-07-24 19:28:25.663858] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.499 [2024-07-24 19:28:25.663867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.499 [2024-07-24 19:28:25.666534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.499 [2024-07-24 19:28:25.676002] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.499 [2024-07-24 19:28:25.676502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.499 [2024-07-24 19:28:25.676521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:39.499 [2024-07-24 19:28:25.676530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:39.499 [2024-07-24 19:28:25.676701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:39.499 [2024-07-24 19:28:25.676877] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.499 [2024-07-24 19:28:25.676888] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.499 [2024-07-24 19:28:25.676897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.499 [2024-07-24 19:28:25.679584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.499 [2024-07-24 19:28:25.688886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.499 [2024-07-24 19:28:25.689392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.499 [2024-07-24 19:28:25.689411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:39.499 [2024-07-24 19:28:25.689420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:39.499 [2024-07-24 19:28:25.689590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:39.499 [2024-07-24 19:28:25.689767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.499 [2024-07-24 19:28:25.689778] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.499 [2024-07-24 19:28:25.689787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.499 [2024-07-24 19:28:25.692456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.499 [2024-07-24 19:28:25.701913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.499 [2024-07-24 19:28:25.702349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.499 [2024-07-24 19:28:25.702368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:39.499 [2024-07-24 19:28:25.702378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:39.499 [2024-07-24 19:28:25.702547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:39.499 [2024-07-24 19:28:25.702723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.499 [2024-07-24 19:28:25.702736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.499 [2024-07-24 19:28:25.702745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.499 [2024-07-24 19:28:25.705412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.499 [2024-07-24 19:28:25.714882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.499 [2024-07-24 19:28:25.715344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.499 [2024-07-24 19:28:25.715396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:39.499 [2024-07-24 19:28:25.715428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:39.499 [2024-07-24 19:28:25.715985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:39.499 [2024-07-24 19:28:25.716224] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.499 [2024-07-24 19:28:25.716239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.499 [2024-07-24 19:28:25.716253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.499 [2024-07-24 19:28:25.719997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.499 [2024-07-24 19:28:25.728426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.499 [2024-07-24 19:28:25.728869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.499 [2024-07-24 19:28:25.728921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:39.499 [2024-07-24 19:28:25.728953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:39.499 [2024-07-24 19:28:25.729141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:39.499 [2024-07-24 19:28:25.729308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.499 [2024-07-24 19:28:25.729320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.499 [2024-07-24 19:28:25.729329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.499 [2024-07-24 19:28:25.731929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.760 [2024-07-24 19:28:25.741426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.760 [2024-07-24 19:28:25.741888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.760 [2024-07-24 19:28:25.741908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:39.760 [2024-07-24 19:28:25.741918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:39.760 [2024-07-24 19:28:25.742089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:39.760 [2024-07-24 19:28:25.742259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.760 [2024-07-24 19:28:25.742271] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.760 [2024-07-24 19:28:25.742280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.760 [2024-07-24 19:28:25.744917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.760 [2024-07-24 19:28:25.754247] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.760 [2024-07-24 19:28:25.754738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.760 [2024-07-24 19:28:25.754789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:39.760 [2024-07-24 19:28:25.754823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:39.760 [2024-07-24 19:28:25.755342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:39.760 [2024-07-24 19:28:25.755509] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.760 [2024-07-24 19:28:25.755520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.760 [2024-07-24 19:28:25.755530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.760 [2024-07-24 19:28:25.758023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.760 [2024-07-24 19:28:25.767171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.760 [2024-07-24 19:28:25.767620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.760 [2024-07-24 19:28:25.767672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:39.760 [2024-07-24 19:28:25.767704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:39.760 [2024-07-24 19:28:25.768308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:39.760 [2024-07-24 19:28:25.768568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.760 [2024-07-24 19:28:25.768579] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.760 [2024-07-24 19:28:25.768595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.760 [2024-07-24 19:28:25.771250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.760 [2024-07-24 19:28:25.779998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.760 [2024-07-24 19:28:25.780525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.760 [2024-07-24 19:28:25.780576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:39.760 [2024-07-24 19:28:25.780609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:39.760 [2024-07-24 19:28:25.781141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:39.760 [2024-07-24 19:28:25.781308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.760 [2024-07-24 19:28:25.781319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.760 [2024-07-24 19:28:25.781329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.760 [2024-07-24 19:28:25.783929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.760 [2024-07-24 19:28:25.792822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.760 [2024-07-24 19:28:25.793274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.760 [2024-07-24 19:28:25.793293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:39.760 [2024-07-24 19:28:25.793302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:39.760 [2024-07-24 19:28:25.793468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:39.760 [2024-07-24 19:28:25.793633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.760 [2024-07-24 19:28:25.793645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.760 [2024-07-24 19:28:25.793654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.760 [2024-07-24 19:28:25.796190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.760 [2024-07-24 19:28:25.805568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.760 [2024-07-24 19:28:25.806052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.760 [2024-07-24 19:28:25.806070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:39.760 [2024-07-24 19:28:25.806079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:39.760 [2024-07-24 19:28:25.806235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:39.760 [2024-07-24 19:28:25.806392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.760 [2024-07-24 19:28:25.806403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.760 [2024-07-24 19:28:25.806412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.760 [2024-07-24 19:28:25.808898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.760 [2024-07-24 19:28:25.818435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.760 [2024-07-24 19:28:25.818884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.760 [2024-07-24 19:28:25.818904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:39.760 [2024-07-24 19:28:25.818914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:39.760 [2024-07-24 19:28:25.819071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:39.760 [2024-07-24 19:28:25.819229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.760 [2024-07-24 19:28:25.819239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.760 [2024-07-24 19:28:25.819248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.760 [2024-07-24 19:28:25.821779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.761 [2024-07-24 19:28:25.831250] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.761 [2024-07-24 19:28:25.831748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.761 [2024-07-24 19:28:25.831801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:39.761 [2024-07-24 19:28:25.831833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:39.761 [2024-07-24 19:28:25.832421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:39.761 [2024-07-24 19:28:25.832669] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.761 [2024-07-24 19:28:25.832680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.761 [2024-07-24 19:28:25.832690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.761 [2024-07-24 19:28:25.835236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.761 [2024-07-24 19:28:25.844076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.761 [2024-07-24 19:28:25.844580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.761 [2024-07-24 19:28:25.844631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:39.761 [2024-07-24 19:28:25.844664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:39.761 [2024-07-24 19:28:25.845274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:39.761 [2024-07-24 19:28:25.845466] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.761 [2024-07-24 19:28:25.845478] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.761 [2024-07-24 19:28:25.845486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.761 [2024-07-24 19:28:25.847974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.761 [2024-07-24 19:28:25.856766] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.761 [2024-07-24 19:28:25.857256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.761 [2024-07-24 19:28:25.857273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:39.761 [2024-07-24 19:28:25.857282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:39.761 [2024-07-24 19:28:25.857439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:39.761 [2024-07-24 19:28:25.857598] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.761 [2024-07-24 19:28:25.857608] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.761 [2024-07-24 19:28:25.857617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.761 [2024-07-24 19:28:25.860164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.761 [2024-07-24 19:28:25.869441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.761 [2024-07-24 19:28:25.869933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.761 [2024-07-24 19:28:25.869951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:39.761 [2024-07-24 19:28:25.869960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:39.761 [2024-07-24 19:28:25.870116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:39.761 [2024-07-24 19:28:25.870273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.761 [2024-07-24 19:28:25.870283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.761 [2024-07-24 19:28:25.870291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.761 [2024-07-24 19:28:25.872847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.761 [2024-07-24 19:28:25.882088] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.761 [2024-07-24 19:28:25.882560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.761 [2024-07-24 19:28:25.882578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:39.761 [2024-07-24 19:28:25.882588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:39.761 [2024-07-24 19:28:25.882767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:39.761 [2024-07-24 19:28:25.882934] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.761 [2024-07-24 19:28:25.882945] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.761 [2024-07-24 19:28:25.882954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.761 [2024-07-24 19:28:25.885470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.761 [2024-07-24 19:28:25.894768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.761 [2024-07-24 19:28:25.895261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.761 [2024-07-24 19:28:25.895278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:39.761 [2024-07-24 19:28:25.895287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:39.761 [2024-07-24 19:28:25.895443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:39.761 [2024-07-24 19:28:25.895600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.761 [2024-07-24 19:28:25.895611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.761 [2024-07-24 19:28:25.895620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.761 [2024-07-24 19:28:25.898165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.761 [2024-07-24 19:28:25.907536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.761 [2024-07-24 19:28:25.908044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.761 [2024-07-24 19:28:25.908097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:39.761 [2024-07-24 19:28:25.908129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:39.761 [2024-07-24 19:28:25.908560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:39.761 [2024-07-24 19:28:25.908724] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.761 [2024-07-24 19:28:25.908735] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.761 [2024-07-24 19:28:25.908761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.761 [2024-07-24 19:28:25.911284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.761 [2024-07-24 19:28:25.920303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.761 [2024-07-24 19:28:25.920784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.761 [2024-07-24 19:28:25.920837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:39.761 [2024-07-24 19:28:25.920868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:39.761 [2024-07-24 19:28:25.921458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:39.761 [2024-07-24 19:28:25.921968] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.761 [2024-07-24 19:28:25.921980] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.761 [2024-07-24 19:28:25.921989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.761 [2024-07-24 19:28:25.924503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.761 [2024-07-24 19:28:25.933089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.761 [2024-07-24 19:28:25.933590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.761 [2024-07-24 19:28:25.933609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:39.761 [2024-07-24 19:28:25.933619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:39.761 [2024-07-24 19:28:25.933802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:39.761 [2024-07-24 19:28:25.933969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.761 [2024-07-24 19:28:25.933979] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.761 [2024-07-24 19:28:25.933988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.761 [2024-07-24 19:28:25.936499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.761 [2024-07-24 19:28:25.945802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.761 [2024-07-24 19:28:25.946294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.761 [2024-07-24 19:28:25.946311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:39.761 [2024-07-24 19:28:25.946323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:39.762 [2024-07-24 19:28:25.946479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:39.762 [2024-07-24 19:28:25.946636] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.762 [2024-07-24 19:28:25.946645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.762 [2024-07-24 19:28:25.946654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.762 [2024-07-24 19:28:25.949208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.762 [2024-07-24 19:28:25.958490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.762 [2024-07-24 19:28:25.958982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.762 [2024-07-24 19:28:25.959000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:39.762 [2024-07-24 19:28:25.959009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:39.762 [2024-07-24 19:28:25.959165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:39.762 [2024-07-24 19:28:25.959321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.762 [2024-07-24 19:28:25.959331] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.762 [2024-07-24 19:28:25.959339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.762 [2024-07-24 19:28:25.961887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.762 [2024-07-24 19:28:25.971252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.762 [2024-07-24 19:28:25.971737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.762 [2024-07-24 19:28:25.971755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:39.762 [2024-07-24 19:28:25.971763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:39.762 [2024-07-24 19:28:25.971919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:39.762 [2024-07-24 19:28:25.972076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.762 [2024-07-24 19:28:25.972086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.762 [2024-07-24 19:28:25.972094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.762 [2024-07-24 19:28:25.974645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.762 [2024-07-24 19:28:25.984012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.762 [2024-07-24 19:28:25.984484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.762 [2024-07-24 19:28:25.984535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:39.762 [2024-07-24 19:28:25.984567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:39.762 [2024-07-24 19:28:25.985086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:39.762 [2024-07-24 19:28:25.985253] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.762 [2024-07-24 19:28:25.985268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.762 [2024-07-24 19:28:25.985277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.762 [2024-07-24 19:28:25.987776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.762 [2024-07-24 19:28:25.996830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.762 [2024-07-24 19:28:25.997308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.762 [2024-07-24 19:28:25.997354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:39.762 [2024-07-24 19:28:25.997387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:40.023 [2024-07-24 19:28:25.997915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:40.023 [2024-07-24 19:28:25.998082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.023 [2024-07-24 19:28:25.998095] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.023 [2024-07-24 19:28:25.998105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.023 [2024-07-24 19:28:26.000669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.023 [2024-07-24 19:28:26.009689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.023 [2024-07-24 19:28:26.010200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.023 [2024-07-24 19:28:26.010219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:40.023 [2024-07-24 19:28:26.010229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:40.023 [2024-07-24 19:28:26.010394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:40.023 [2024-07-24 19:28:26.010559] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.023 [2024-07-24 19:28:26.010570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.023 [2024-07-24 19:28:26.010579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.023 [2024-07-24 19:28:26.013254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.023 [2024-07-24 19:28:26.022513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.023 [2024-07-24 19:28:26.022952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.023 [2024-07-24 19:28:26.022971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:40.023 [2024-07-24 19:28:26.022981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:40.023 [2024-07-24 19:28:26.023146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:40.023 [2024-07-24 19:28:26.023312] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.023 [2024-07-24 19:28:26.023323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.023 [2024-07-24 19:28:26.023332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.023 [2024-07-24 19:28:26.025930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.023 [2024-07-24 19:28:26.035422] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.023 [2024-07-24 19:28:26.035888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.023 [2024-07-24 19:28:26.035906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:40.023 [2024-07-24 19:28:26.035916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:40.023 [2024-07-24 19:28:26.036082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:40.023 [2024-07-24 19:28:26.036247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.023 [2024-07-24 19:28:26.036259] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.023 [2024-07-24 19:28:26.036269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.023 [2024-07-24 19:28:26.038868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.023 [2024-07-24 19:28:26.048342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.024 [2024-07-24 19:28:26.048839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.024 [2024-07-24 19:28:26.048892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:40.024 [2024-07-24 19:28:26.048924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:40.024 [2024-07-24 19:28:26.049516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:40.024 [2024-07-24 19:28:26.050128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.024 [2024-07-24 19:28:26.050143] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.024 [2024-07-24 19:28:26.050156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.024 [2024-07-24 19:28:26.053891] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.024 [2024-07-24 19:28:26.061709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.024 [2024-07-24 19:28:26.062150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.024 [2024-07-24 19:28:26.062209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:40.024 [2024-07-24 19:28:26.062242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:40.024 [2024-07-24 19:28:26.062847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:40.024 [2024-07-24 19:28:26.063347] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.024 [2024-07-24 19:28:26.063357] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.024 [2024-07-24 19:28:26.063367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.024 [2024-07-24 19:28:26.065848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.024 [2024-07-24 19:28:26.074500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.024 [2024-07-24 19:28:26.074998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.024 [2024-07-24 19:28:26.075051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:40.024 [2024-07-24 19:28:26.075084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:40.024 [2024-07-24 19:28:26.075449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:40.024 [2024-07-24 19:28:26.075607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.024 [2024-07-24 19:28:26.075617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.024 [2024-07-24 19:28:26.075626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.024 [2024-07-24 19:28:26.078171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.024 [2024-07-24 19:28:26.087250] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.024 [2024-07-24 19:28:26.087667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.024 [2024-07-24 19:28:26.087731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:40.024 [2024-07-24 19:28:26.087765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:40.024 [2024-07-24 19:28:26.088216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:40.024 [2024-07-24 19:28:26.088374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.024 [2024-07-24 19:28:26.088385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.024 [2024-07-24 19:28:26.088393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.024 [2024-07-24 19:28:26.090937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.024 [2024-07-24 19:28:26.099977] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.024 [2024-07-24 19:28:26.100465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.024 [2024-07-24 19:28:26.100483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:40.024 [2024-07-24 19:28:26.100492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:40.024 [2024-07-24 19:28:26.100648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:40.024 [2024-07-24 19:28:26.100830] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.024 [2024-07-24 19:28:26.100842] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.024 [2024-07-24 19:28:26.100851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.024 [2024-07-24 19:28:26.103373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.024 [2024-07-24 19:28:26.112742] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.024 [2024-07-24 19:28:26.113206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.024 [2024-07-24 19:28:26.113224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:40.024 [2024-07-24 19:28:26.113233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:40.024 [2024-07-24 19:28:26.113389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:40.024 [2024-07-24 19:28:26.113546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.024 [2024-07-24 19:28:26.113557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.024 [2024-07-24 19:28:26.113568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.024 [2024-07-24 19:28:26.116116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.024 [2024-07-24 19:28:26.125482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.024 [2024-07-24 19:28:26.125958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.024 [2024-07-24 19:28:26.126011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:40.024 [2024-07-24 19:28:26.126043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:40.024 [2024-07-24 19:28:26.126489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:40.024 [2024-07-24 19:28:26.126647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.024 [2024-07-24 19:28:26.126657] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.024 [2024-07-24 19:28:26.126667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.024 [2024-07-24 19:28:26.129215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.024 [2024-07-24 19:28:26.138233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.024 [2024-07-24 19:28:26.138727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.024 [2024-07-24 19:28:26.138744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:40.024 [2024-07-24 19:28:26.138754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:40.024 [2024-07-24 19:28:26.138910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:40.024 [2024-07-24 19:28:26.139067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.024 [2024-07-24 19:28:26.139077] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.024 [2024-07-24 19:28:26.139085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.024 [2024-07-24 19:28:26.141631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.024 [2024-07-24 19:28:26.151008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.024 [2024-07-24 19:28:26.151427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.024 [2024-07-24 19:28:26.151469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:40.024 [2024-07-24 19:28:26.151502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:40.024 [2024-07-24 19:28:26.152107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:40.024 [2024-07-24 19:28:26.152596] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.024 [2024-07-24 19:28:26.152608] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.024 [2024-07-24 19:28:26.152617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.024 [2024-07-24 19:28:26.155098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.024 [2024-07-24 19:28:26.163739] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.024 [2024-07-24 19:28:26.164233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.024 [2024-07-24 19:28:26.164292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:40.024 [2024-07-24 19:28:26.164325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:40.025 [2024-07-24 19:28:26.164931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:40.025 [2024-07-24 19:28:26.165385] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.025 [2024-07-24 19:28:26.165396] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.025 [2024-07-24 19:28:26.165405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.025 [2024-07-24 19:28:26.167864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.025 [2024-07-24 19:28:26.176424] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.025 [2024-07-24 19:28:26.176926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.025 [2024-07-24 19:28:26.176978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:40.025 [2024-07-24 19:28:26.177011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:40.025 [2024-07-24 19:28:26.177480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:40.025 [2024-07-24 19:28:26.177638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.025 [2024-07-24 19:28:26.177647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.025 [2024-07-24 19:28:26.177656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.025 [2024-07-24 19:28:26.180202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.025 [2024-07-24 19:28:26.189130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.025 [2024-07-24 19:28:26.189616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.025 [2024-07-24 19:28:26.189654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:40.025 [2024-07-24 19:28:26.189688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:40.025 [2024-07-24 19:28:26.190270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:40.025 [2024-07-24 19:28:26.190437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.025 [2024-07-24 19:28:26.190449] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.025 [2024-07-24 19:28:26.190458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.025 [2024-07-24 19:28:26.192948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.025 [2024-07-24 19:28:26.201792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.025 [2024-07-24 19:28:26.202257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.025 [2024-07-24 19:28:26.202274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:40.025 [2024-07-24 19:28:26.202284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:40.025 [2024-07-24 19:28:26.202440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:40.025 [2024-07-24 19:28:26.202600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.025 [2024-07-24 19:28:26.202610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.025 [2024-07-24 19:28:26.202619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.025 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1681444 Killed "${NVMF_APP[@]}" "$@" 00:27:40.025 [2024-07-24 19:28:26.205228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
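Every cycle above fails the same way: errno = 111 is ECONNREFUSED on Linux, meaning nothing is accepting TCP connections on 10.0.0.2:4420 once the previous nvmf target process (pid 1681444, reaped in the Killed line above) is gone, so each bdevperf reconnect attempt is refused immediately. A minimal sketch, not part of the test itself, that reproduces the same refusal from a shell (assumes nc(1) is installed and the port has no listener):

    # hypothetical probe: zero-I/O connect with a 1 s timeout
    nc -z -w1 10.0.0.2 4420 \
        || echo "connect refused/timed out - the host-side view of errno = 111"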
00:27:40.025 19:28:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:27:40.025 19:28:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:27:40.025 19:28:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:27:40.025 19:28:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:27:40.025 19:28:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:40.025 19:28:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1682834
00:27:40.025 [2024-07-24 19:28:26.214812] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.025 19:28:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1682834
00:27:40.025 19:28:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:27:40.025 [2024-07-24 19:28:26.215312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.025 [2024-07-24 19:28:26.215331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:40.025 [2024-07-24 19:28:26.215341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:40.025 [2024-07-24 19:28:26.215510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:40.025 19:28:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1682834 ']'
00:27:40.025 [2024-07-24 19:28:26.215681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.025 [2024-07-24 19:28:26.215701] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.025 [2024-07-24 19:28:26.215710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.025 19:28:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:40.025 19:28:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100
00:27:40.025 19:28:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:40.025 19:28:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable
00:27:40.025 19:28:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:40.025 [2024-07-24 19:28:26.218385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
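Here tgt_init relaunches a fresh nvmf_tgt (pid 1682834) inside the cvl_0_0_ns_spdk namespace while the host side keeps retrying, and waitforlisten then blocks until the new process answers on /var/tmp/spdk.sock. A rough sketch of that wait, assuming the tree's scripts/rpc.py; this is an illustration under those assumptions, not the helper's actual body:

    # poll the RPC socket up to max_retries=100 times, bailing out if the target dies
    for _ in $(seq 1 100); do
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        kill -0 1682834 2>/dev/null || break   # target process exited; give up
        sleep 0.1
    done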
[log condensed: the reset/reconnect failure cycle for tqpair=0xf62a70 repeats at 19:28:26.227, .240 and .253]
00:27:40.287 [2024-07-24 19:28:26.265501] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization...
[2024-07-24 19:28:26.265545] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[log condensed: the cycle repeats at 19:28:26.266, .279 and .292]
00:27:40.287 EAL: No free 2048 kB hugepages reported on node 1
[log condensed: the reset/reconnect failure cycle repeats at 19:28:26.305, .318 and .330]
00:27:40.287 [2024-07-24 19:28:26.340436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
[log condensed: the cycle repeats at 19:28:26.343]
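Total cores available: 3, together with the reactor startup lines further down, lines up with the -m 0xE mask handed to nvmf_tgt above: 0xE is binary 1110, so cores 1, 2 and 3 are selected and core 0 is left free. A quick sanity check of any core mask from a shell with bc(1):

    echo 'obase=2; ibase=16; E' | bc    # prints 1110: bits for cores 3,2,1 set, core 0 clear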
[log condensed: four more identical reset/reconnect failure cycles for tqpair=0xf62a70 at 19:28:26.356, .369, .382 and .395]
00:27:40.288 [2024-07-24 19:28:26.408105] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.288 [2024-07-24 19:28:26.408586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.288 [2024-07-24 19:28:26.408606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:40.288 [2024-07-24 19:28:26.408617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:40.288 [2024-07-24 19:28:26.408795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:40.288 [2024-07-24 19:28:26.408974] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.288 [2024-07-24 19:28:26.408986] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.288 [2024-07-24 19:28:26.408995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.288 [2024-07-24 19:28:26.409327] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:40.288 [2024-07-24 19:28:26.409359] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:40.288 [2024-07-24 19:28:26.409369] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:40.288 [2024-07-24 19:28:26.409379] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:40.288 [2024-07-24 19:28:26.409387] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:40.288 [2024-07-24 19:28:26.409430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:40.288 [2024-07-24 19:28:26.409517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:40.288 [2024-07-24 19:28:26.409519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:40.288 [2024-07-24 19:28:26.411669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.288 [2024-07-24 19:28:26.421131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.288 [2024-07-24 19:28:26.421654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.288 [2024-07-24 19:28:26.421675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:40.288 [2024-07-24 19:28:26.421687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:40.288 [2024-07-24 19:28:26.421865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:40.288 [2024-07-24 19:28:26.422037] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.288 [2024-07-24 19:28:26.422049] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.288 [2024-07-24 19:28:26.422059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:40.288 [2024-07-24 19:28:26.424731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.288 [2024-07-24 19:28:26.434030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.288 [2024-07-24 19:28:26.434544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.288 [2024-07-24 19:28:26.434567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:40.288 [2024-07-24 19:28:26.434583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:40.288 [2024-07-24 19:28:26.434760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:40.288 [2024-07-24 19:28:26.434933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.288 [2024-07-24 19:28:26.434944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.288 [2024-07-24 19:28:26.434955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.288 [2024-07-24 19:28:26.437622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.288 [2024-07-24 19:28:26.446922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.288 [2024-07-24 19:28:26.447453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.288 [2024-07-24 19:28:26.447474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:40.288 [2024-07-24 19:28:26.447485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:40.288 [2024-07-24 19:28:26.447657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:40.288 [2024-07-24 19:28:26.447834] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.288 [2024-07-24 19:28:26.447846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.288 [2024-07-24 19:28:26.447856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.288 [2024-07-24 19:28:26.450551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.288 [2024-07-24 19:28:26.459857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.288 [2024-07-24 19:28:26.460306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.288 [2024-07-24 19:28:26.460329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:40.288 [2024-07-24 19:28:26.460341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:40.288 [2024-07-24 19:28:26.460512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:40.288 [2024-07-24 19:28:26.460686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.288 [2024-07-24 19:28:26.460699] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.288 [2024-07-24 19:28:26.460708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.288 [2024-07-24 19:28:26.463382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.288 [2024-07-24 19:28:26.472831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.288 [2024-07-24 19:28:26.473321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.288 [2024-07-24 19:28:26.473340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:40.288 [2024-07-24 19:28:26.473350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:40.288 [2024-07-24 19:28:26.473519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:40.288 [2024-07-24 19:28:26.473690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.288 [2024-07-24 19:28:26.473704] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.288 [2024-07-24 19:28:26.473719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.289 [2024-07-24 19:28:26.476397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.289 [2024-07-24 19:28:26.485850] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.289 [2024-07-24 19:28:26.486357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.289 [2024-07-24 19:28:26.486376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:40.289 [2024-07-24 19:28:26.486387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:40.289 [2024-07-24 19:28:26.486557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:40.289 [2024-07-24 19:28:26.486735] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.289 [2024-07-24 19:28:26.486747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.289 [2024-07-24 19:28:26.486757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.289 [2024-07-24 19:28:26.489424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.289 [2024-07-24 19:28:26.499099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.289 [2024-07-24 19:28:26.499596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.289 [2024-07-24 19:28:26.499616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:40.289 [2024-07-24 19:28:26.499626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:40.289 [2024-07-24 19:28:26.499802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:40.289 [2024-07-24 19:28:26.499973] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.289 [2024-07-24 19:28:26.499984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.289 [2024-07-24 19:28:26.499993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.289 [2024-07-24 19:28:26.502660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.289 [2024-07-24 19:28:26.512138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.289 [2024-07-24 19:28:26.512648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.289 [2024-07-24 19:28:26.512667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:40.289 [2024-07-24 19:28:26.512677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:40.289 [2024-07-24 19:28:26.512851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:40.289 [2024-07-24 19:28:26.513022] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.289 [2024-07-24 19:28:26.513032] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.289 [2024-07-24 19:28:26.513041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.289 [2024-07-24 19:28:26.515708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.549 [2024-07-24 19:28:26.525165] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.549 [2024-07-24 19:28:26.525658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.550 [2024-07-24 19:28:26.525677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:40.550 [2024-07-24 19:28:26.525687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:40.550 [2024-07-24 19:28:26.525864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:40.550 [2024-07-24 19:28:26.526035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.550 [2024-07-24 19:28:26.526046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.550 [2024-07-24 19:28:26.526055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.550 [2024-07-24 19:28:26.528728] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.550 [2024-07-24 19:28:26.538169] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.550 [2024-07-24 19:28:26.538671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.550 [2024-07-24 19:28:26.538689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:40.550 [2024-07-24 19:28:26.538699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:40.550 [2024-07-24 19:28:26.538874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:40.550 [2024-07-24 19:28:26.539044] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.550 [2024-07-24 19:28:26.539055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.550 [2024-07-24 19:28:26.539065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.550 [2024-07-24 19:28:26.541736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.550 [2024-07-24 19:28:26.551187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.550 [2024-07-24 19:28:26.551690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.550 [2024-07-24 19:28:26.551708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:40.550 [2024-07-24 19:28:26.551722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:40.550 [2024-07-24 19:28:26.551892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:40.550 [2024-07-24 19:28:26.552063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.550 [2024-07-24 19:28:26.552082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.550 [2024-07-24 19:28:26.552092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.550 [2024-07-24 19:28:26.554759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.550 [2024-07-24 19:28:26.564204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.550 [2024-07-24 19:28:26.564711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.550 [2024-07-24 19:28:26.564733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:40.550 [2024-07-24 19:28:26.564743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:40.550 [2024-07-24 19:28:26.564928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:40.550 [2024-07-24 19:28:26.565098] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.550 [2024-07-24 19:28:26.565110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.550 [2024-07-24 19:28:26.565119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.550 [2024-07-24 19:28:26.567787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.550 [2024-07-24 19:28:26.577079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.550 [2024-07-24 19:28:26.577564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.550 [2024-07-24 19:28:26.577582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:40.550 [2024-07-24 19:28:26.577592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:40.550 [2024-07-24 19:28:26.577766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:40.550 [2024-07-24 19:28:26.577937] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.550 [2024-07-24 19:28:26.577947] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.550 [2024-07-24 19:28:26.577956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.550 [2024-07-24 19:28:26.580623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.550 [2024-07-24 19:28:26.590069] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.550 [2024-07-24 19:28:26.590576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.550 [2024-07-24 19:28:26.590594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:40.550 [2024-07-24 19:28:26.590603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:40.550 [2024-07-24 19:28:26.590777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:40.550 [2024-07-24 19:28:26.590948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.550 [2024-07-24 19:28:26.590959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.550 [2024-07-24 19:28:26.590968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.550 [2024-07-24 19:28:26.593634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.550 [2024-07-24 19:28:26.603077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.550 [2024-07-24 19:28:26.603556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.550 [2024-07-24 19:28:26.603575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:40.550 [2024-07-24 19:28:26.603585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:40.550 [2024-07-24 19:28:26.603760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:40.550 [2024-07-24 19:28:26.603931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.550 [2024-07-24 19:28:26.603942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.550 [2024-07-24 19:28:26.603955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.550 [2024-07-24 19:28:26.606620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.550 [2024-07-24 19:28:26.616063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.550 [2024-07-24 19:28:26.616524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.550 [2024-07-24 19:28:26.616542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:40.550 [2024-07-24 19:28:26.616552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:40.550 [2024-07-24 19:28:26.616727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:40.550 [2024-07-24 19:28:26.616898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.550 [2024-07-24 19:28:26.616910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.550 [2024-07-24 19:28:26.616919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.550 [2024-07-24 19:28:26.619582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.550 [2024-07-24 19:28:26.629025] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.550 [2024-07-24 19:28:26.629531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.550 [2024-07-24 19:28:26.629548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420 00:27:40.550 [2024-07-24 19:28:26.629559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set 00:27:40.550 [2024-07-24 19:28:26.629733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor 00:27:40.550 [2024-07-24 19:28:26.629903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.550 [2024-07-24 19:28:26.629914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.550 [2024-07-24 19:28:26.629924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.550 [2024-07-24 19:28:26.632589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.550 [2024-07-24 19:28:26.642033 .. 19:28:27.047893] (the identical reset/connect/fail cycle repeats 32 more times, roughly every 13 ms: connect() to 10.0.0.2:4420 is refused with errno 111, tqpair=0xf62a70 cannot be flushed, controller reinitialization fails, and bdev_nvme schedules the next reset)
00:27:41.074 [2024-07-24 19:28:27.057511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.074 [2024-07-24 19:28:27.057967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.074 [2024-07-24 19:28:27.057986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:41.074 [2024-07-24 19:28:27.057996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:41.074 [2024-07-24 19:28:27.058166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:41.074 [2024-07-24 19:28:27.058337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.074 [2024-07-24 19:28:27.058348] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.074 [2024-07-24 19:28:27.058359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.074 [2024-07-24 19:28:27.061032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.074 19:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:27:41.074 19:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0
00:27:41.074 19:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:27:41.074 19:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:27:41.074 19:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:41.074 [2024-07-24 19:28:27.070491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.074 [2024-07-24 19:28:27.070907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.074 [2024-07-24 19:28:27.070926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:41.074 [2024-07-24 19:28:27.070937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:41.074 [2024-07-24 19:28:27.071107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:41.074 [2024-07-24 19:28:27.071277] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.074 [2024-07-24 19:28:27.071290] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.074 [2024-07-24 19:28:27.071301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.074 [2024-07-24 19:28:27.073975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.074 [2024-07-24 19:28:27.083438 .. 19:28:27.099803] (two more identical reconnect cycles fail the same way, at 19:28:27.083 and 19:28:27.096, while the target is still being configured)
00:27:41.074 [2024-07-24 19:28:27.109255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.074 [2024-07-24 19:28:27.109649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.075 [2024-07-24 19:28:27.109667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:41.075 [2024-07-24 19:28:27.109677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:41.075 [2024-07-24 19:28:27.109853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:41.075 [2024-07-24 19:28:27.110025] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.075 [2024-07-24 19:28:27.110037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.075 [2024-07-24 19:28:27.110047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.075 19:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
19:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
19:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
19:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:41.075 [2024-07-24 19:28:27.112717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.075 [2024-07-24 19:28:27.116939] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:41.075 [2024-07-24 19:28:27.122177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.075 [2024-07-24 19:28:27.122524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.075 [2024-07-24 19:28:27.122543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:41.075 [2024-07-24 19:28:27.122553] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:41.075 [2024-07-24 19:28:27.122732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:41.075 [2024-07-24 19:28:27.122903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.075 [2024-07-24 19:28:27.122914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.075 [2024-07-24 19:28:27.122924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.075 [2024-07-24 19:28:27.125595] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
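For context on the trace just above: rpc_cmd is the autotest_common.sh helper that forwards to SPDK's scripts/rpc.py, so the transport creation recorded here corresponds roughly to this manual invocation (a sketch; /var/tmp/spdk.sock is rpc.py's default socket and an assumption about this job's configuration):

  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192

The *** TCP Transport Init *** notice that follows is the target acknowledging that call.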
00:27:41.075 19:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
19:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
19:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
19:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:41.075 [2024-07-24 19:28:27.135069] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.075 [2024-07-24 19:28:27.135501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.075 [2024-07-24 19:28:27.135520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:41.075 [2024-07-24 19:28:27.135530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:41.075 [2024-07-24 19:28:27.135700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:41.075 [2024-07-24 19:28:27.135877] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.075 [2024-07-24 19:28:27.135889] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.075 [2024-07-24 19:28:27.135898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.075 [2024-07-24 19:28:27.138566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.075 [2024-07-24 19:28:27.148026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.075 [2024-07-24 19:28:27.148527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.075 [2024-07-24 19:28:27.148547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:41.075 [2024-07-24 19:28:27.148557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:41.075 [2024-07-24 19:28:27.148734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:41.075 [2024-07-24 19:28:27.148905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.075 [2024-07-24 19:28:27.148917] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.075 [2024-07-24 19:28:27.148926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.075 [2024-07-24 19:28:27.151606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.075 Malloc0
00:27:41.075 19:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
19:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
19:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
19:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:41.075 [2024-07-24 19:28:27.160911] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.075 [2024-07-24 19:28:27.161328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.075 [2024-07-24 19:28:27.161346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:41.075 [2024-07-24 19:28:27.161356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:41.075 [2024-07-24 19:28:27.161525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:41.075 [2024-07-24 19:28:27.161695] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.075 [2024-07-24 19:28:27.161707] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.075 [2024-07-24 19:28:27.161720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.075 [2024-07-24 19:28:27.164392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.075 19:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
19:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
19:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
19:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:41.075 [2024-07-24 19:28:27.173854] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.075 [2024-07-24 19:28:27.174358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.075 [2024-07-24 19:28:27.174376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf62a70 with addr=10.0.0.2, port=4420
00:27:41.075 [2024-07-24 19:28:27.174386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62a70 is same with the state(5) to be set
00:27:41.075 [2024-07-24 19:28:27.174557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf62a70 (9): Bad file descriptor
00:27:41.075 [2024-07-24 19:28:27.174733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.075 [2024-07-24 19:28:27.174745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.075 [2024-07-24 19:28:27.174755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.075 19:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
19:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
19:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
19:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:41.075 [2024-07-24 19:28:27.177431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[2024-07-24 19:28:27.179097] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
19:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
19:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1681785
[2024-07-24 19:28:27.186886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[2024-07-24 19:28:27.260965] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:27:51.058
00:27:51.058 Latency(us)
00:27:51.058 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:27:51.058 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:51.058 	 Verification LBA range: start 0x0 length 0x4000
00:27:51.058 	 Nvme1n1             :      15.01    8783.35      34.31   13431.50       0.00    5743.09     632.42   15204.35
00:27:51.058 ===================================================================================================================
00:27:51.058 Total                       :                8783.35      34.31   13431.50       0.00    5743.09     632.42   15204.35
00:27:51.058 19:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
19:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
19:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
19:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
19:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
19:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
19:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
19:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
19:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
19:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
19:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
19:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
19:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
19:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
19:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
19:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
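A quick consistency check on the bdevperf table above (plain arithmetic, not something the log itself prints): at the 4096-byte I/O size recorded in the job header, 8783.35 IOPS x 4096 B = 35,976,601.6 B/s, and dividing by 1,048,576 gives 34.31 — exactly the MiB/s column. In shell:

  python3 -c 'print(8783.35 * 4096 / 2**20)'   # prints 34.3099... ~= 34.31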
nvmf/common.sh@489 -- # '[' -n 1682834 ']' 00:27:51.058 19:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1682834 00:27:51.058 19:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 1682834 ']' 00:27:51.058 19:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 1682834 00:27:51.058 19:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:27:51.058 19:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:51.058 19:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1682834 00:27:51.058 19:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:51.058 19:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:51.058 19:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1682834' 00:27:51.058 killing process with pid 1682834 00:27:51.058 19:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 1682834 00:27:51.059 19:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 1682834 00:27:51.059 19:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:51.059 19:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:51.059 19:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:51.059 19:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:51.059 19:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:51.059 19:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.059 19:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:51.059 19:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:51.995 19:28:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:51.995 00:27:51.995 real 0m27.285s 00:27:51.995 user 1m2.117s 00:27:51.995 sys 0m8.060s 00:27:51.995 19:28:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:51.995 19:28:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:51.995 ************************************ 00:27:51.995 END TEST nvmf_bdevperf 00:27:51.995 ************************************ 00:27:52.254 19:28:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:52.254 19:28:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:52.254 19:28:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:52.254 19:28:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.254 ************************************ 00:27:52.254 START TEST nvmf_target_disconnect 00:27:52.254 ************************************ 00:27:52.254 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:52.254 * Looking for test storage... 00:27:52.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:52.254 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:52.254 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:27:52.254 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:52.254 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:52.254 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:52.254 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:52.254 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:52.254 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:52.254 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:52.254 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:52.254 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:52.254 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:52.254 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:27:52.254 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:27:52.254 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:52.254 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:52.255 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:52.255 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:52.255 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:52.255 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:52.255 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:52.255 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:52.255 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.255 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.255 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.255 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:52.255 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.255 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:27:52.255 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:52.255 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:52.255 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:52.255 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:52.255 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:52.255 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:52.255 19:28:38 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:52.255 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:52.255 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:52.255 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:52.255 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:52.255 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:52.255 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:52.255 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:52.255 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:52.255 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:52.255 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:52.255 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.255 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:52.255 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.255 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:52.255 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:52.255 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:27:52.255 19:28:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:27:58.832 19:28:45 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:58.832 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:58.832 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:58.832 19:28:45 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:58.832 Found net devices under 0000:af:00.0: cvl_0_0 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:58.832 Found net devices under 0000:af:00.1: cvl_0_1 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.832 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:58.833 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:27:58.833 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:58.833 19:28:45 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:58.833 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:58.833 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:58.833 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:58.833 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:58.833 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:58.833 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:58.833 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:58.833 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:58.833 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:58.833 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:58.833 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:58.833 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:58.833 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:58.833 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:59.092 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:59.092 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:59.092 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:59.092 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:59.092 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:59.092 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:59.092 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:59.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:59.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:27:59.092 00:27:59.092 --- 10.0.0.2 ping statistics --- 00:27:59.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.092 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:27:59.351 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:59.351 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:59.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:27:59.351 00:27:59.351 --- 10.0.0.1 ping statistics --- 00:27:59.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.351 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:27:59.351 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:59.351 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:27:59.351 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:59.351 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:59.351 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:59.351 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:59.351 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:59.351 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:59.351 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:59.351 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:59.351 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:59.351 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:59.351 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:59.351 ************************************ 00:27:59.351 START TEST nvmf_target_disconnect_tc1 00:27:59.351 ************************************ 00:27:59.351 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:27:59.351 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:59.351 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:27:59.351 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:59.352 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:59.352 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:59.352 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:59.352 19:28:45 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:59.352 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:59.352 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:59.352 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:59.352 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:59.352 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:59.352 EAL: No free 2048 kB hugepages reported on node 1 00:27:59.352 [2024-07-24 19:28:45.526005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.352 [2024-07-24 19:28:45.526052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1001140 with addr=10.0.0.2, port=4420 00:27:59.352 [2024-07-24 19:28:45.526077] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:59.352 [2024-07-24 19:28:45.526094] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:59.352 [2024-07-24 19:28:45.526103] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:27:59.352 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:59.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:59.352 Initializing NVMe Controllers 00:27:59.352 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:27:59.352 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:59.352 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:59.352 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:59.352 00:27:59.352 real 0m0.119s 00:27:59.352 user 0m0.051s 00:27:59.352 sys 0m0.068s 00:27:59.352 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:59.352 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:59.352 ************************************ 00:27:59.352 END TEST nvmf_target_disconnect_tc1 00:27:59.352 ************************************ 00:27:59.352 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:59.352 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:59.352 19:28:45 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:59.352 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:59.611 ************************************ 00:27:59.611 START TEST nvmf_target_disconnect_tc2 00:27:59.611 ************************************ 00:27:59.611 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:27:59.611 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:59.611 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:59.611 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:59.611 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:59.611 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:59.611 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1688131 00:27:59.611 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1688131 00:27:59.611 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:59.611 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1688131 ']' 00:27:59.611 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:59.611 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:59.611 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:59.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:59.612 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:59.612 19:28:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:59.612 [2024-07-24 19:28:45.674034] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:27:59.612 [2024-07-24 19:28:45.674078] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:59.612 EAL: No free 2048 kB hugepages reported on node 1 00:27:59.612 [2024-07-24 19:28:45.761908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:59.612 [2024-07-24 19:28:45.834776] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:59.612 [2024-07-24 19:28:45.834815] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:59.612 [2024-07-24 19:28:45.834824] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:59.612 [2024-07-24 19:28:45.834832] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:59.612 [2024-07-24 19:28:45.834839] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:59.612 [2024-07-24 19:28:45.834986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:27:59.612 [2024-07-24 19:28:45.835097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:27:59.612 [2024-07-24 19:28:45.835208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:59.612 [2024-07-24 19:28:45.835210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:28:00.550 19:28:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:00.550 19:28:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:28:00.550 19:28:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:00.550 19:28:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:00.550 19:28:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:00.550 19:28:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:00.550 19:28:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:00.550 19:28:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.550 19:28:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:00.550 Malloc0 00:28:00.550 19:28:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.550 19:28:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:00.550 19:28:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.550 19:28:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:00.550 [2024-07-24 19:28:46.559710] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:00.550 19:28:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.550 19:28:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:00.550 19:28:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 
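Before the reconnect run below starts failing, it helps to recall the network plumbing the harness performed at 19:28:45 above. Condensed from the xtrace output into plain commands (interface names cvl_0_0/cvl_0_1 as probed on this rig), the topology setup amounts to:

    # The target port moves into its own netns; the initiator stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                   # reachability check, both ways
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1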
00:28:00.550 19:28:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:00.550 19:28:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.550 19:28:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:00.550 19:28:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.550 19:28:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:00.550 19:28:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.550 19:28:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:00.550 19:28:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.550 19:28:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:00.550 [2024-07-24 19:28:46.587945] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:00.550 19:28:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.550 19:28:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:00.551 19:28:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.551 19:28:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:00.551 19:28:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.551 19:28:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1688339 00:28:00.551 19:28:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:28:00.551 19:28:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:00.551 EAL: No free 2048 kB hugepages reported on node 1 00:28:02.456 19:28:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1688131 00:28:02.456 19:28:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting 
I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Write completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Write completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Write completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Write completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Write completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Write completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Write completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Write completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Write completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Write completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 [2024-07-24 19:28:48.617144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 
00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Write completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Write completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Write completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Write completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Write completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Write completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Write completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Write completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 [2024-07-24 19:28:48.617372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.456 Read completed with error (sct=0, sc=8) 00:28:02.456 starting I/O failed 00:28:02.457 Read completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Read completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Read completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Read completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Read completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Read completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Read completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Read completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Write completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Write completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Read completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Read completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Write completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Read completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Write completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Write 
completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Write completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Write completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Read completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Read completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Write completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Write completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Write completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Write completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Write completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Write completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Write completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Write completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Read completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 [2024-07-24 19:28:48.617599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.457 Read completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Read completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Read completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Read completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Write completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Read completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Read completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Read completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Write completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Write completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Write completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Write completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Write completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Write completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Read completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Write completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Read completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Write completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Write completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Read completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Write completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Write completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Write completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Write completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Read completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Write 
completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Read completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Read completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Read completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Write completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Read completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 Write completed with error (sct=0, sc=8) 00:28:02.457 starting I/O failed 00:28:02.457 [2024-07-24 19:28:48.617821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:02.457 [2024-07-24 19:28:48.618023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.457 [2024-07-24 19:28:48.618077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:02.457 qpair failed and we were unable to recover it. 00:28:02.457 [2024-07-24 19:28:48.618407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.457 [2024-07-24 19:28:48.618450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:02.457 qpair failed and we were unable to recover it. 00:28:02.457 [2024-07-24 19:28:48.618609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.457 [2024-07-24 19:28:48.618651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:02.457 qpair failed and we were unable to recover it. 00:28:02.457 [2024-07-24 19:28:48.618985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.457 [2024-07-24 19:28:48.619017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:02.457 qpair failed and we were unable to recover it. 00:28:02.457 [2024-07-24 19:28:48.619318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.457 [2024-07-24 19:28:48.619360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:02.457 qpair failed and we were unable to recover it. 00:28:02.457 [2024-07-24 19:28:48.619580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.457 [2024-07-24 19:28:48.619628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:02.457 qpair failed and we were unable to recover it. 00:28:02.457 [2024-07-24 19:28:48.619853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.457 [2024-07-24 19:28:48.619894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:02.457 qpair failed and we were unable to recover it. 00:28:02.457 [2024-07-24 19:28:48.620146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.457 [2024-07-24 19:28:48.620159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:02.457 qpair failed and we were unable to recover it. 
00:28:02.457 [2024-07-24 19:28:48.620241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.457 [2024-07-24 19:28:48.620253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:02.457 qpair failed and we were unable to recover it. 00:28:02.457 [2024-07-24 19:28:48.620482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.457 [2024-07-24 19:28:48.620523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:02.457 qpair failed and we were unable to recover it. 00:28:02.457 [2024-07-24 19:28:48.620843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.457 [2024-07-24 19:28:48.620884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:02.457 qpair failed and we were unable to recover it. 00:28:02.457 [2024-07-24 19:28:48.621261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.457 [2024-07-24 19:28:48.621302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:02.457 qpair failed and we were unable to recover it. 00:28:02.457 [2024-07-24 19:28:48.621510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.457 [2024-07-24 19:28:48.621551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:02.457 qpair failed and we were unable to recover it. 00:28:02.457 [2024-07-24 19:28:48.621887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.457 [2024-07-24 19:28:48.621900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:02.457 qpair failed and we were unable to recover it. 00:28:02.457 [2024-07-24 19:28:48.622146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.457 [2024-07-24 19:28:48.622187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:02.458 qpair failed and we were unable to recover it. 00:28:02.458 [2024-07-24 19:28:48.622416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.458 [2024-07-24 19:28:48.622458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:02.458 qpair failed and we were unable to recover it. 00:28:02.458 [2024-07-24 19:28:48.622684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.458 [2024-07-24 19:28:48.622733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:02.458 qpair failed and we were unable to recover it. 00:28:02.458 [2024-07-24 19:28:48.622971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.458 [2024-07-24 19:28:48.623011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:02.458 qpair failed and we were unable to recover it. 
00:28:02.458 [2024-07-24 19:28:48.623377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.458 [2024-07-24 19:28:48.623417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:02.458 qpair failed and we were unable to recover it. 00:28:02.458 [2024-07-24 19:28:48.623701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.458 [2024-07-24 19:28:48.623753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:02.458 qpair failed and we were unable to recover it. 00:28:02.458 [2024-07-24 19:28:48.624081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.458 [2024-07-24 19:28:48.624122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:02.458 qpair failed and we were unable to recover it. 00:28:02.458 [2024-07-24 19:28:48.624413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.458 [2024-07-24 19:28:48.624454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:02.458 qpair failed and we were unable to recover it. 00:28:02.458 [2024-07-24 19:28:48.624763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.458 [2024-07-24 19:28:48.624805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:02.458 qpair failed and we were unable to recover it. 00:28:02.458 [2024-07-24 19:28:48.625170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.458 [2024-07-24 19:28:48.625210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:02.458 qpair failed and we were unable to recover it. 00:28:02.458 [2024-07-24 19:28:48.625492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.458 [2024-07-24 19:28:48.625533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:02.458 qpair failed and we were unable to recover it. 00:28:02.458 [2024-07-24 19:28:48.625833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.458 [2024-07-24 19:28:48.625874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:02.458 qpair failed and we were unable to recover it. 00:28:02.458 [2024-07-24 19:28:48.626252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.458 [2024-07-24 19:28:48.626293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:02.458 qpair failed and we were unable to recover it. 00:28:02.458 [2024-07-24 19:28:48.626585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.458 [2024-07-24 19:28:48.626625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:02.458 qpair failed and we were unable to recover it. 
00:28:02.458 [2024-07-24 19:28:48.626931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.458 [2024-07-24 19:28:48.626973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:02.458 qpair failed and we were unable to recover it. 00:28:02.458 [2024-07-24 19:28:48.627208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.458 [2024-07-24 19:28:48.627248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:02.458 qpair failed and we were unable to recover it. 00:28:02.458 [2024-07-24 19:28:48.627635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.458 [2024-07-24 19:28:48.627690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.458 qpair failed and we were unable to recover it. 00:28:02.458 [2024-07-24 19:28:48.628080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.458 [2024-07-24 19:28:48.628103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.458 qpair failed and we were unable to recover it. 00:28:02.458 [2024-07-24 19:28:48.628333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.458 [2024-07-24 19:28:48.628351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.458 qpair failed and we were unable to recover it. 00:28:02.458 [2024-07-24 19:28:48.628578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.458 [2024-07-24 19:28:48.628597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.458 qpair failed and we were unable to recover it. 00:28:02.458 [2024-07-24 19:28:48.628847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.458 [2024-07-24 19:28:48.628865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.458 qpair failed and we were unable to recover it. 00:28:02.458 [2024-07-24 19:28:48.629102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.458 [2024-07-24 19:28:48.629120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.458 qpair failed and we were unable to recover it. 00:28:02.458 [2024-07-24 19:28:48.629353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.458 [2024-07-24 19:28:48.629370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.458 qpair failed and we were unable to recover it. 00:28:02.458 [2024-07-24 19:28:48.629656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.458 [2024-07-24 19:28:48.629673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.458 qpair failed and we were unable to recover it. 
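Note that the tqpair context address changes across the attempts ending here, from 0x7fd544000b90 to 0x7fd54c000b90 and then to 0x7fd554000b90, which is consistent with each recovery pass allocating a fresh TCP qpair context before redialing the target. Reduced to a self-contained sketch of the retry shape only (illustrative; SPDK's actual reconnect path in nvme_tcp.c is more involved, and try_connect, max_attempts, and the backoff values are invented for the example):

/* reconnect_loop.c - illustrative retry pattern, not SPDK's recovery code. */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

/* Stand-in for the transport's qpair connect step; always refused here. */
static bool try_connect(int attempt)
{
    errno = ECONNREFUSED;
    fprintf(stderr, "attempt %d: connect() failed, errno = %d\n",
            attempt, errno);
    return false;
}

int main(void)
{
    const int max_attempts = 5;           /* invented bound for the sketch */
    unsigned delay = 1;
    for (int i = 1; i <= max_attempts; i++) {
        if (try_connect(i))
            return 0;                     /* connected; qpair recovered */
        sleep(delay);
        if (delay < 8)
            delay *= 2;                   /* capped exponential backoff */
    }
    fprintf(stderr, "qpair failed and we were unable to recover it.\n");
    return 1;
}

With an unreachable target the sketch, like the test, ends each pass at the same "qpair failed" message; the difference is only that the log below keeps going for many more attempts.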
00:28:02.458 [2024-07-24 19:28:48.629969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.458 [2024-07-24 19:28:48.629987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.458 qpair failed and we were unable to recover it. 00:28:02.458 [2024-07-24 19:28:48.630296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.458 [2024-07-24 19:28:48.630313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.458 qpair failed and we were unable to recover it. 00:28:02.458 [2024-07-24 19:28:48.630541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.458 [2024-07-24 19:28:48.630558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.458 qpair failed and we were unable to recover it. 00:28:02.458 [2024-07-24 19:28:48.630783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.458 [2024-07-24 19:28:48.630801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.458 qpair failed and we were unable to recover it. 00:28:02.458 [2024-07-24 19:28:48.631036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.458 [2024-07-24 19:28:48.631054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.458 qpair failed and we were unable to recover it. 00:28:02.458 [2024-07-24 19:28:48.631271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.458 [2024-07-24 19:28:48.631289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.458 qpair failed and we were unable to recover it. 00:28:02.458 [2024-07-24 19:28:48.631541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.458 [2024-07-24 19:28:48.631559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.458 qpair failed and we were unable to recover it. 00:28:02.458 [2024-07-24 19:28:48.631847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.458 [2024-07-24 19:28:48.631867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.458 qpair failed and we were unable to recover it. 00:28:02.458 [2024-07-24 19:28:48.632174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.458 [2024-07-24 19:28:48.632192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.458 qpair failed and we were unable to recover it. 00:28:02.458 [2024-07-24 19:28:48.632425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.458 [2024-07-24 19:28:48.632442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.458 qpair failed and we were unable to recover it. 
00:28:02.458 [2024-07-24 19:28:48.632631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.458 [2024-07-24 19:28:48.632649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.458 qpair failed and we were unable to recover it. 00:28:02.458 [2024-07-24 19:28:48.632869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.458 [2024-07-24 19:28:48.632887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.458 qpair failed and we were unable to recover it. 00:28:02.459 [2024-07-24 19:28:48.633111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.459 [2024-07-24 19:28:48.633151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.459 qpair failed and we were unable to recover it. 00:28:02.459 [2024-07-24 19:28:48.633489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.459 [2024-07-24 19:28:48.633530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.459 qpair failed and we were unable to recover it. 00:28:02.459 [2024-07-24 19:28:48.633765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.459 [2024-07-24 19:28:48.633807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.459 qpair failed and we were unable to recover it. 00:28:02.459 [2024-07-24 19:28:48.634102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.459 [2024-07-24 19:28:48.634143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.459 qpair failed and we were unable to recover it. 00:28:02.459 [2024-07-24 19:28:48.634434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.459 [2024-07-24 19:28:48.634475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.459 qpair failed and we were unable to recover it. 00:28:02.459 [2024-07-24 19:28:48.634781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.459 [2024-07-24 19:28:48.634822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.459 qpair failed and we were unable to recover it. 00:28:02.459 [2024-07-24 19:28:48.635091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.459 [2024-07-24 19:28:48.635132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.459 qpair failed and we were unable to recover it. 00:28:02.459 [2024-07-24 19:28:48.635450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.459 [2024-07-24 19:28:48.635491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.459 qpair failed and we were unable to recover it. 
00:28:02.459 [2024-07-24 19:28:48.635734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.459 [2024-07-24 19:28:48.635776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.459 qpair failed and we were unable to recover it. 00:28:02.459 [2024-07-24 19:28:48.636092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.459 [2024-07-24 19:28:48.636132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.459 qpair failed and we were unable to recover it. 00:28:02.459 [2024-07-24 19:28:48.636480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.459 [2024-07-24 19:28:48.636521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.459 qpair failed and we were unable to recover it. 00:28:02.459 [2024-07-24 19:28:48.636741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.459 [2024-07-24 19:28:48.636759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.459 qpair failed and we were unable to recover it. 00:28:02.459 [2024-07-24 19:28:48.637081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.459 [2024-07-24 19:28:48.637122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.459 qpair failed and we were unable to recover it. 00:28:02.459 [2024-07-24 19:28:48.637407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.459 [2024-07-24 19:28:48.637447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.459 qpair failed and we were unable to recover it. 00:28:02.459 [2024-07-24 19:28:48.637668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.459 [2024-07-24 19:28:48.637708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.459 qpair failed and we were unable to recover it. 00:28:02.459 [2024-07-24 19:28:48.637953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.459 [2024-07-24 19:28:48.637994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.459 qpair failed and we were unable to recover it. 00:28:02.459 [2024-07-24 19:28:48.638266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.459 [2024-07-24 19:28:48.638307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.459 qpair failed and we were unable to recover it. 00:28:02.459 [2024-07-24 19:28:48.638597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.459 [2024-07-24 19:28:48.638638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.459 qpair failed and we were unable to recover it. 
00:28:02.459 [2024-07-24 19:28:48.638880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.459 [2024-07-24 19:28:48.638921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.459 qpair failed and we were unable to recover it. 00:28:02.459 [2024-07-24 19:28:48.639290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.459 [2024-07-24 19:28:48.639330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.459 qpair failed and we were unable to recover it. 00:28:02.459 [2024-07-24 19:28:48.639635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.459 [2024-07-24 19:28:48.639675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.459 qpair failed and we were unable to recover it. 00:28:02.459 [2024-07-24 19:28:48.640031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.459 [2024-07-24 19:28:48.640072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.459 qpair failed and we were unable to recover it. 00:28:02.459 [2024-07-24 19:28:48.640379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.459 [2024-07-24 19:28:48.640420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.459 qpair failed and we were unable to recover it. 00:28:02.459 [2024-07-24 19:28:48.640692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.459 [2024-07-24 19:28:48.640746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.459 qpair failed and we were unable to recover it. 00:28:02.459 [2024-07-24 19:28:48.641029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.459 [2024-07-24 19:28:48.641070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.459 qpair failed and we were unable to recover it. 00:28:02.459 [2024-07-24 19:28:48.641432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.459 [2024-07-24 19:28:48.641473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.459 qpair failed and we were unable to recover it. 00:28:02.459 [2024-07-24 19:28:48.641813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.459 [2024-07-24 19:28:48.641854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.459 qpair failed and we were unable to recover it. 00:28:02.459 [2024-07-24 19:28:48.642128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.459 [2024-07-24 19:28:48.642179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.459 qpair failed and we were unable to recover it. 
00:28:02.459 [2024-07-24 19:28:48.642448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.459 [2024-07-24 19:28:48.642466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.459 qpair failed and we were unable to recover it. 00:28:02.459 [2024-07-24 19:28:48.642760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.459 [2024-07-24 19:28:48.642801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.459 qpair failed and we were unable to recover it. 00:28:02.459 [2024-07-24 19:28:48.643093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.459 [2024-07-24 19:28:48.643133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.459 qpair failed and we were unable to recover it. 00:28:02.459 [2024-07-24 19:28:48.643406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.459 [2024-07-24 19:28:48.643447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.459 qpair failed and we were unable to recover it. 00:28:02.459 [2024-07-24 19:28:48.643749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.459 [2024-07-24 19:28:48.643792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.459 qpair failed and we were unable to recover it. 00:28:02.459 [2024-07-24 19:28:48.644062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.459 [2024-07-24 19:28:48.644103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.459 qpair failed and we were unable to recover it. 00:28:02.459 [2024-07-24 19:28:48.644460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.459 [2024-07-24 19:28:48.644501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.459 qpair failed and we were unable to recover it. 00:28:02.460 [2024-07-24 19:28:48.644818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.460 [2024-07-24 19:28:48.644865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.460 qpair failed and we were unable to recover it. 00:28:02.460 [2024-07-24 19:28:48.645174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.460 [2024-07-24 19:28:48.645214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.460 qpair failed and we were unable to recover it. 00:28:02.460 [2024-07-24 19:28:48.645589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.460 [2024-07-24 19:28:48.645629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.460 qpair failed and we were unable to recover it. 
00:28:02.460 [2024-07-24 19:28:48.645912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.460 [2024-07-24 19:28:48.645954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.460 qpair failed and we were unable to recover it. 00:28:02.460 [2024-07-24 19:28:48.646315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.460 [2024-07-24 19:28:48.646355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.460 qpair failed and we were unable to recover it. 00:28:02.460 [2024-07-24 19:28:48.646655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.460 [2024-07-24 19:28:48.646696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.460 qpair failed and we were unable to recover it. 00:28:02.460 [2024-07-24 19:28:48.647040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.460 [2024-07-24 19:28:48.647081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.460 qpair failed and we were unable to recover it. 00:28:02.460 [2024-07-24 19:28:48.647448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.460 [2024-07-24 19:28:48.647489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.460 qpair failed and we were unable to recover it. 00:28:02.460 [2024-07-24 19:28:48.647874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.460 [2024-07-24 19:28:48.647916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.460 qpair failed and we were unable to recover it. 00:28:02.460 [2024-07-24 19:28:48.648135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.460 [2024-07-24 19:28:48.648152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.460 qpair failed and we were unable to recover it. 00:28:02.460 [2024-07-24 19:28:48.648319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.460 [2024-07-24 19:28:48.648360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.460 qpair failed and we were unable to recover it. 00:28:02.460 [2024-07-24 19:28:48.648742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.460 [2024-07-24 19:28:48.648783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.460 qpair failed and we were unable to recover it. 00:28:02.460 [2024-07-24 19:28:48.649122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.460 [2024-07-24 19:28:48.649140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.460 qpair failed and we were unable to recover it. 
00:28:02.460 [2024-07-24 19:28:48.649380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.460 [2024-07-24 19:28:48.649397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.460 qpair failed and we were unable to recover it. 00:28:02.460 [2024-07-24 19:28:48.649705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.460 [2024-07-24 19:28:48.649757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.460 qpair failed and we were unable to recover it. 00:28:02.460 [2024-07-24 19:28:48.650105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.460 [2024-07-24 19:28:48.650145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.460 qpair failed and we were unable to recover it. 00:28:02.460 [2024-07-24 19:28:48.650420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.460 [2024-07-24 19:28:48.650460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.460 qpair failed and we were unable to recover it. 00:28:02.460 [2024-07-24 19:28:48.650823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.460 [2024-07-24 19:28:48.650864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.460 qpair failed and we were unable to recover it. 00:28:02.460 [2024-07-24 19:28:48.651153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.460 [2024-07-24 19:28:48.651171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.460 qpair failed and we were unable to recover it. 00:28:02.460 [2024-07-24 19:28:48.651390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.460 [2024-07-24 19:28:48.651408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.460 qpair failed and we were unable to recover it. 00:28:02.460 [2024-07-24 19:28:48.651665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.460 [2024-07-24 19:28:48.651741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.460 qpair failed and we were unable to recover it. 00:28:02.460 [2024-07-24 19:28:48.652039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.460 [2024-07-24 19:28:48.652081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.460 qpair failed and we were unable to recover it. 00:28:02.460 [2024-07-24 19:28:48.652443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.460 [2024-07-24 19:28:48.652485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.460 qpair failed and we were unable to recover it. 
00:28:02.460 [2024-07-24 19:28:48.652846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.460 [2024-07-24 19:28:48.652888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.460 qpair failed and we were unable to recover it. 00:28:02.460 [2024-07-24 19:28:48.653178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.460 [2024-07-24 19:28:48.653220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.460 qpair failed and we were unable to recover it. 00:28:02.460 [2024-07-24 19:28:48.653529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.460 [2024-07-24 19:28:48.653570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.460 qpair failed and we were unable to recover it. 00:28:02.460 [2024-07-24 19:28:48.653747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.460 [2024-07-24 19:28:48.653788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.460 qpair failed and we were unable to recover it. 00:28:02.460 [2024-07-24 19:28:48.654102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.460 [2024-07-24 19:28:48.654143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.460 qpair failed and we were unable to recover it. 00:28:02.460 [2024-07-24 19:28:48.654286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.460 [2024-07-24 19:28:48.654304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.460 qpair failed and we were unable to recover it. 00:28:02.460 [2024-07-24 19:28:48.654472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.460 [2024-07-24 19:28:48.654489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.460 qpair failed and we were unable to recover it. 00:28:02.460 [2024-07-24 19:28:48.654651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.460 [2024-07-24 19:28:48.654669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.460 qpair failed and we were unable to recover it. 00:28:02.460 [2024-07-24 19:28:48.654935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.460 [2024-07-24 19:28:48.654977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.460 qpair failed and we were unable to recover it. 00:28:02.460 [2024-07-24 19:28:48.655273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.460 [2024-07-24 19:28:48.655313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.460 qpair failed and we were unable to recover it. 
00:28:02.460 [2024-07-24 19:28:48.655616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.460 [2024-07-24 19:28:48.655656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.460 qpair failed and we were unable to recover it. 00:28:02.460 [2024-07-24 19:28:48.656001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.460 [2024-07-24 19:28:48.656042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.460 qpair failed and we were unable to recover it. 00:28:02.461 [2024-07-24 19:28:48.656258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.461 [2024-07-24 19:28:48.656299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.461 qpair failed and we were unable to recover it. 00:28:02.461 [2024-07-24 19:28:48.656602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.461 [2024-07-24 19:28:48.656643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.461 qpair failed and we were unable to recover it. 00:28:02.461 [2024-07-24 19:28:48.656964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.461 [2024-07-24 19:28:48.657008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.461 qpair failed and we were unable to recover it. 00:28:02.461 [2024-07-24 19:28:48.657254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.461 [2024-07-24 19:28:48.657271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.461 qpair failed and we were unable to recover it. 00:28:02.461 [2024-07-24 19:28:48.657497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.461 [2024-07-24 19:28:48.657514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.461 qpair failed and we were unable to recover it. 00:28:02.461 [2024-07-24 19:28:48.657742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.461 [2024-07-24 19:28:48.657762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.461 qpair failed and we were unable to recover it. 00:28:02.461 [2024-07-24 19:28:48.657950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.461 [2024-07-24 19:28:48.657967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.461 qpair failed and we were unable to recover it. 00:28:02.461 [2024-07-24 19:28:48.658205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.461 [2024-07-24 19:28:48.658222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.461 qpair failed and we were unable to recover it. 
00:28:02.461 [2024-07-24 19:28:48.658390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.461 [2024-07-24 19:28:48.658408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.461 qpair failed and we were unable to recover it. 00:28:02.461 [2024-07-24 19:28:48.658728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.461 [2024-07-24 19:28:48.658769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.461 qpair failed and we were unable to recover it. 00:28:02.461 [2024-07-24 19:28:48.659058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.461 [2024-07-24 19:28:48.659099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.461 qpair failed and we were unable to recover it. 00:28:02.461 [2024-07-24 19:28:48.659315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.461 [2024-07-24 19:28:48.659356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.461 qpair failed and we were unable to recover it. 00:28:02.461 [2024-07-24 19:28:48.659581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.461 [2024-07-24 19:28:48.659622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.461 qpair failed and we were unable to recover it. 00:28:02.461 [2024-07-24 19:28:48.659916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.461 [2024-07-24 19:28:48.659958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.461 qpair failed and we were unable to recover it. 00:28:02.461 [2024-07-24 19:28:48.660229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.461 [2024-07-24 19:28:48.660270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.461 qpair failed and we were unable to recover it. 00:28:02.461 [2024-07-24 19:28:48.660561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.461 [2024-07-24 19:28:48.660602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.461 qpair failed and we were unable to recover it. 00:28:02.461 [2024-07-24 19:28:48.660830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.461 [2024-07-24 19:28:48.660848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.461 qpair failed and we were unable to recover it. 00:28:02.461 [2024-07-24 19:28:48.661015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.461 [2024-07-24 19:28:48.661056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.461 qpair failed and we were unable to recover it. 
00:28:02.461 [2024-07-24 19:28:48.661394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.461 [2024-07-24 19:28:48.661435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.461 qpair failed and we were unable to recover it. 00:28:02.461 [2024-07-24 19:28:48.661797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.461 [2024-07-24 19:28:48.661839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.461 qpair failed and we were unable to recover it. 00:28:02.461 [2024-07-24 19:28:48.662127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.461 [2024-07-24 19:28:48.662167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.461 qpair failed and we were unable to recover it. 00:28:02.461 [2024-07-24 19:28:48.662555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.461 [2024-07-24 19:28:48.662596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.461 qpair failed and we were unable to recover it. 00:28:02.461 [2024-07-24 19:28:48.662886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.461 [2024-07-24 19:28:48.662904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.461 qpair failed and we were unable to recover it. 00:28:02.461 [2024-07-24 19:28:48.663095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.461 [2024-07-24 19:28:48.663135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.461 qpair failed and we were unable to recover it. 00:28:02.461 [2024-07-24 19:28:48.663415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.461 [2024-07-24 19:28:48.663456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.461 qpair failed and we were unable to recover it. 00:28:02.461 [2024-07-24 19:28:48.663752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.461 [2024-07-24 19:28:48.663793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.461 qpair failed and we were unable to recover it. 00:28:02.461 [2024-07-24 19:28:48.664158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.461 [2024-07-24 19:28:48.664199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.461 qpair failed and we were unable to recover it. 00:28:02.461 [2024-07-24 19:28:48.664470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.461 [2024-07-24 19:28:48.664511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.461 qpair failed and we were unable to recover it. 
00:28:02.461 [2024-07-24 19:28:48.664799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.462 [2024-07-24 19:28:48.664840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.462 qpair failed and we were unable to recover it. 00:28:02.462 [2024-07-24 19:28:48.665189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.462 [2024-07-24 19:28:48.665231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.462 qpair failed and we were unable to recover it. 00:28:02.462 [2024-07-24 19:28:48.665534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.462 [2024-07-24 19:28:48.665574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.462 qpair failed and we were unable to recover it. 00:28:02.462 [2024-07-24 19:28:48.665859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.462 [2024-07-24 19:28:48.665900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.462 qpair failed and we were unable to recover it. 00:28:02.462 [2024-07-24 19:28:48.666251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.462 [2024-07-24 19:28:48.666291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.462 qpair failed and we were unable to recover it. 00:28:02.462 [2024-07-24 19:28:48.666652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.462 [2024-07-24 19:28:48.666692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.462 qpair failed and we were unable to recover it. 00:28:02.462 [2024-07-24 19:28:48.666998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.462 [2024-07-24 19:28:48.667039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.462 qpair failed and we were unable to recover it. 00:28:02.462 [2024-07-24 19:28:48.667260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.462 [2024-07-24 19:28:48.667300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.462 qpair failed and we were unable to recover it. 00:28:02.462 [2024-07-24 19:28:48.667502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.462 [2024-07-24 19:28:48.667543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.462 qpair failed and we were unable to recover it. 00:28:02.462 [2024-07-24 19:28:48.667883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.462 [2024-07-24 19:28:48.667925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.462 qpair failed and we were unable to recover it. 
00:28:02.462 [2024-07-24 19:28:48.668211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.462 [2024-07-24 19:28:48.668251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.462 qpair failed and we were unable to recover it. 00:28:02.462 [2024-07-24 19:28:48.668523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.462 [2024-07-24 19:28:48.668564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.462 qpair failed and we were unable to recover it. 00:28:02.462 [2024-07-24 19:28:48.668928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.462 [2024-07-24 19:28:48.668969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.462 qpair failed and we were unable to recover it. 00:28:02.462 [2024-07-24 19:28:48.669252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.462 [2024-07-24 19:28:48.669292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.462 qpair failed and we were unable to recover it. 00:28:02.462 [2024-07-24 19:28:48.669662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.462 [2024-07-24 19:28:48.669703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.462 qpair failed and we were unable to recover it. 00:28:02.462 [2024-07-24 19:28:48.669867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.462 [2024-07-24 19:28:48.669885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.462 qpair failed and we were unable to recover it. 00:28:02.462 [2024-07-24 19:28:48.670110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.462 [2024-07-24 19:28:48.670150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.462 qpair failed and we were unable to recover it. 00:28:02.462 [2024-07-24 19:28:48.670508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.462 [2024-07-24 19:28:48.670554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.462 qpair failed and we were unable to recover it. 00:28:02.462 [2024-07-24 19:28:48.670930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.462 [2024-07-24 19:28:48.670971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.462 qpair failed and we were unable to recover it. 00:28:02.462 [2024-07-24 19:28:48.671309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.462 [2024-07-24 19:28:48.671349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.462 qpair failed and we were unable to recover it. 
00:28:02.462 [2024-07-24 19:28:48.671573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.462 [2024-07-24 19:28:48.671613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420
00:28:02.462 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 19:28:48.671 through 19:28:48.729 ...]
00:28:02.739 [2024-07-24 19:28:48.729941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.739 [2024-07-24 19:28:48.730021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.739 qpair failed and we were unable to recover it.
[... the same failure then repeats for tqpair=0x7fd54c000b90 from 19:28:48.730 through 19:28:48.738 ...]
00:28:02.740 [2024-07-24 19:28:48.738504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.740 [2024-07-24 19:28:48.738517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.740 qpair failed and we were unable to recover it. 00:28:02.740 [2024-07-24 19:28:48.738845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.740 [2024-07-24 19:28:48.738887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.740 qpair failed and we were unable to recover it. 00:28:02.740 [2024-07-24 19:28:48.739249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.740 [2024-07-24 19:28:48.739289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.740 qpair failed and we were unable to recover it. 00:28:02.740 [2024-07-24 19:28:48.739651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.740 [2024-07-24 19:28:48.739691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.740 qpair failed and we were unable to recover it. 00:28:02.740 [2024-07-24 19:28:48.740060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.740 [2024-07-24 19:28:48.740102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.740 qpair failed and we were unable to recover it. 00:28:02.740 [2024-07-24 19:28:48.740467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.740 [2024-07-24 19:28:48.740507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.740 qpair failed and we were unable to recover it. 00:28:02.740 [2024-07-24 19:28:48.740786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.740 [2024-07-24 19:28:48.740827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.740 qpair failed and we were unable to recover it. 00:28:02.740 [2024-07-24 19:28:48.741104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.740 [2024-07-24 19:28:48.741117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.740 qpair failed and we were unable to recover it. 00:28:02.740 [2024-07-24 19:28:48.741334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.740 [2024-07-24 19:28:48.741347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.740 qpair failed and we were unable to recover it. 00:28:02.740 [2024-07-24 19:28:48.741624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.740 [2024-07-24 19:28:48.741636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.740 qpair failed and we were unable to recover it. 
00:28:02.740 [2024-07-24 19:28:48.741953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.740 [2024-07-24 19:28:48.741994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.740 qpair failed and we were unable to recover it. 00:28:02.740 [2024-07-24 19:28:48.742214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.740 [2024-07-24 19:28:48.742255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.740 qpair failed and we were unable to recover it. 00:28:02.740 [2024-07-24 19:28:48.742541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.740 [2024-07-24 19:28:48.742581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.740 qpair failed and we were unable to recover it. 00:28:02.740 [2024-07-24 19:28:48.742893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.740 [2024-07-24 19:28:48.742934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.740 qpair failed and we were unable to recover it. 00:28:02.740 [2024-07-24 19:28:48.743269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.740 [2024-07-24 19:28:48.743360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:02.740 qpair failed and we were unable to recover it. 00:28:02.740 [2024-07-24 19:28:48.743773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.740 [2024-07-24 19:28:48.743821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:02.740 qpair failed and we were unable to recover it. 00:28:02.740 [2024-07-24 19:28:48.744064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.740 [2024-07-24 19:28:48.744082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:02.740 qpair failed and we were unable to recover it. 00:28:02.740 [2024-07-24 19:28:48.744319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.740 [2024-07-24 19:28:48.744359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:02.740 qpair failed and we were unable to recover it. 00:28:02.740 [2024-07-24 19:28:48.744648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.740 [2024-07-24 19:28:48.744689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:02.740 qpair failed and we were unable to recover it. 00:28:02.740 [2024-07-24 19:28:48.745011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.740 [2024-07-24 19:28:48.745053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:02.740 qpair failed and we were unable to recover it. 
00:28:02.741 [2024-07-24 19:28:48.745366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.741 [2024-07-24 19:28:48.745405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:02.741 qpair failed and we were unable to recover it. 00:28:02.741 [2024-07-24 19:28:48.745633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.741 [2024-07-24 19:28:48.745673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:02.741 qpair failed and we were unable to recover it. 00:28:02.741 [2024-07-24 19:28:48.746036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.741 [2024-07-24 19:28:48.746055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:02.741 qpair failed and we were unable to recover it. 00:28:02.741 [2024-07-24 19:28:48.746304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.741 [2024-07-24 19:28:48.746344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:02.741 qpair failed and we were unable to recover it. 00:28:02.741 [2024-07-24 19:28:48.746631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.741 [2024-07-24 19:28:48.746672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:02.741 qpair failed and we were unable to recover it. 00:28:02.741 [2024-07-24 19:28:48.746981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.741 [2024-07-24 19:28:48.747061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:02.741 qpair failed and we were unable to recover it. 00:28:02.741 [2024-07-24 19:28:48.747441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.741 [2024-07-24 19:28:48.747484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.741 qpair failed and we were unable to recover it. 00:28:02.741 [2024-07-24 19:28:48.747712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.741 [2024-07-24 19:28:48.747763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.741 qpair failed and we were unable to recover it. 00:28:02.741 [2024-07-24 19:28:48.747987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.741 [2024-07-24 19:28:48.748025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.741 qpair failed and we were unable to recover it. 00:28:02.741 [2024-07-24 19:28:48.748329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.741 [2024-07-24 19:28:48.748370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.741 qpair failed and we were unable to recover it. 
00:28:02.741 [2024-07-24 19:28:48.748709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.741 [2024-07-24 19:28:48.748757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.741 qpair failed and we were unable to recover it. 00:28:02.741 [2024-07-24 19:28:48.749075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.741 [2024-07-24 19:28:48.749115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.741 qpair failed and we were unable to recover it. 00:28:02.741 [2024-07-24 19:28:48.749416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.741 [2024-07-24 19:28:48.749457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.741 qpair failed and we were unable to recover it. 00:28:02.741 [2024-07-24 19:28:48.749784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.741 [2024-07-24 19:28:48.749825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.741 qpair failed and we were unable to recover it. 00:28:02.741 [2024-07-24 19:28:48.750204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.741 [2024-07-24 19:28:48.750245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.741 qpair failed and we were unable to recover it. 00:28:02.741 [2024-07-24 19:28:48.750631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.741 [2024-07-24 19:28:48.750671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.741 qpair failed and we were unable to recover it. 00:28:02.741 [2024-07-24 19:28:48.750987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.741 [2024-07-24 19:28:48.751028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.741 qpair failed and we were unable to recover it. 00:28:02.741 [2024-07-24 19:28:48.751245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.741 [2024-07-24 19:28:48.751286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.741 qpair failed and we were unable to recover it. 00:28:02.741 [2024-07-24 19:28:48.751567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.741 [2024-07-24 19:28:48.751607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.741 qpair failed and we were unable to recover it. 00:28:02.741 [2024-07-24 19:28:48.751917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.741 [2024-07-24 19:28:48.751959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.741 qpair failed and we were unable to recover it. 
00:28:02.741 [2024-07-24 19:28:48.752247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.741 [2024-07-24 19:28:48.752287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.741 qpair failed and we were unable to recover it. 00:28:02.741 [2024-07-24 19:28:48.752668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.741 [2024-07-24 19:28:48.752708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.741 qpair failed and we were unable to recover it. 00:28:02.741 [2024-07-24 19:28:48.753079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.741 [2024-07-24 19:28:48.753120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.741 qpair failed and we were unable to recover it. 00:28:02.741 [2024-07-24 19:28:48.753470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.741 [2024-07-24 19:28:48.753510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.741 qpair failed and we were unable to recover it. 00:28:02.741 [2024-07-24 19:28:48.753853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.741 [2024-07-24 19:28:48.753894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.741 qpair failed and we were unable to recover it. 00:28:02.741 [2024-07-24 19:28:48.754169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.741 [2024-07-24 19:28:48.754209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.741 qpair failed and we were unable to recover it. 00:28:02.741 [2024-07-24 19:28:48.754547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.741 [2024-07-24 19:28:48.754587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.742 qpair failed and we were unable to recover it. 00:28:02.742 [2024-07-24 19:28:48.754950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.742 [2024-07-24 19:28:48.754991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.742 qpair failed and we were unable to recover it. 00:28:02.742 [2024-07-24 19:28:48.755264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.742 [2024-07-24 19:28:48.755304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.742 qpair failed and we were unable to recover it. 00:28:02.742 [2024-07-24 19:28:48.755573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.742 [2024-07-24 19:28:48.755614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.742 qpair failed and we were unable to recover it. 
00:28:02.742 [2024-07-24 19:28:48.755849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.742 [2024-07-24 19:28:48.755890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.742 qpair failed and we were unable to recover it. 00:28:02.742 [2024-07-24 19:28:48.756198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.742 [2024-07-24 19:28:48.756239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.742 qpair failed and we were unable to recover it. 00:28:02.742 [2024-07-24 19:28:48.756577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.742 [2024-07-24 19:28:48.756617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.742 qpair failed and we were unable to recover it. 00:28:02.742 [2024-07-24 19:28:48.756907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.742 [2024-07-24 19:28:48.756948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.742 qpair failed and we were unable to recover it. 00:28:02.742 [2024-07-24 19:28:48.757260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.742 [2024-07-24 19:28:48.757305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.742 qpair failed and we were unable to recover it. 00:28:02.742 [2024-07-24 19:28:48.757632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.742 [2024-07-24 19:28:48.757671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.742 qpair failed and we were unable to recover it. 00:28:02.742 [2024-07-24 19:28:48.758048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.742 [2024-07-24 19:28:48.758089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.742 qpair failed and we were unable to recover it. 00:28:02.742 [2024-07-24 19:28:48.758392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.742 [2024-07-24 19:28:48.758433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.742 qpair failed and we were unable to recover it. 00:28:02.742 [2024-07-24 19:28:48.758803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.742 [2024-07-24 19:28:48.758843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.742 qpair failed and we were unable to recover it. 00:28:02.742 [2024-07-24 19:28:48.759120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.742 [2024-07-24 19:28:48.759161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.742 qpair failed and we were unable to recover it. 
00:28:02.742 [2024-07-24 19:28:48.759498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.742 [2024-07-24 19:28:48.759537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.742 qpair failed and we were unable to recover it. 00:28:02.742 [2024-07-24 19:28:48.759827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.742 [2024-07-24 19:28:48.759868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.742 qpair failed and we were unable to recover it. 00:28:02.742 [2024-07-24 19:28:48.760157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.742 [2024-07-24 19:28:48.760197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.742 qpair failed and we were unable to recover it. 00:28:02.742 [2024-07-24 19:28:48.760545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.742 [2024-07-24 19:28:48.760585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.742 qpair failed and we were unable to recover it. 00:28:02.742 [2024-07-24 19:28:48.760964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.742 [2024-07-24 19:28:48.761005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.742 qpair failed and we were unable to recover it. 00:28:02.742 [2024-07-24 19:28:48.761263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.742 [2024-07-24 19:28:48.761276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.742 qpair failed and we were unable to recover it. 00:28:02.742 [2024-07-24 19:28:48.761499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.742 [2024-07-24 19:28:48.761512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.742 qpair failed and we were unable to recover it. 00:28:02.742 [2024-07-24 19:28:48.761788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.742 [2024-07-24 19:28:48.761801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.742 qpair failed and we were unable to recover it. 00:28:02.742 [2024-07-24 19:28:48.761944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.742 [2024-07-24 19:28:48.761958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.742 qpair failed and we were unable to recover it. 00:28:02.742 [2024-07-24 19:28:48.762274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.742 [2024-07-24 19:28:48.762288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.742 qpair failed and we were unable to recover it. 
00:28:02.742 [2024-07-24 19:28:48.762589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.742 [2024-07-24 19:28:48.762629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.742 qpair failed and we were unable to recover it. 00:28:02.742 [2024-07-24 19:28:48.762913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.742 [2024-07-24 19:28:48.762954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.742 qpair failed and we were unable to recover it. 00:28:02.742 [2024-07-24 19:28:48.763315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.742 [2024-07-24 19:28:48.763356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.743 qpair failed and we were unable to recover it. 00:28:02.743 [2024-07-24 19:28:48.763626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.743 [2024-07-24 19:28:48.763666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.743 qpair failed and we were unable to recover it. 00:28:02.743 [2024-07-24 19:28:48.763987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.743 [2024-07-24 19:28:48.764028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.743 qpair failed and we were unable to recover it. 00:28:02.743 [2024-07-24 19:28:48.764370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.743 [2024-07-24 19:28:48.764410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.743 qpair failed and we were unable to recover it. 00:28:02.743 [2024-07-24 19:28:48.764728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.743 [2024-07-24 19:28:48.764770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.743 qpair failed and we were unable to recover it. 00:28:02.743 [2024-07-24 19:28:48.764984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.743 [2024-07-24 19:28:48.765024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.743 qpair failed and we were unable to recover it. 00:28:02.743 [2024-07-24 19:28:48.765258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.743 [2024-07-24 19:28:48.765298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.743 qpair failed and we were unable to recover it. 00:28:02.743 [2024-07-24 19:28:48.765580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.743 [2024-07-24 19:28:48.765621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.743 qpair failed and we were unable to recover it. 
00:28:02.743 [2024-07-24 19:28:48.765906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.743 [2024-07-24 19:28:48.765947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.743 qpair failed and we were unable to recover it. 00:28:02.743 [2024-07-24 19:28:48.766295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.743 [2024-07-24 19:28:48.766336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.743 qpair failed and we were unable to recover it. 00:28:02.743 [2024-07-24 19:28:48.766626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.743 [2024-07-24 19:28:48.766666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.743 qpair failed and we were unable to recover it. 00:28:02.743 [2024-07-24 19:28:48.767066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.743 [2024-07-24 19:28:48.767108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.743 qpair failed and we were unable to recover it. 00:28:02.743 [2024-07-24 19:28:48.767341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.743 [2024-07-24 19:28:48.767381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.743 qpair failed and we were unable to recover it. 00:28:02.743 [2024-07-24 19:28:48.767743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.743 [2024-07-24 19:28:48.767784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.743 qpair failed and we were unable to recover it. 00:28:02.743 [2024-07-24 19:28:48.768145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.743 [2024-07-24 19:28:48.768185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.743 qpair failed and we were unable to recover it. 00:28:02.743 [2024-07-24 19:28:48.768478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.743 [2024-07-24 19:28:48.768491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.743 qpair failed and we were unable to recover it. 00:28:02.743 [2024-07-24 19:28:48.768769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.743 [2024-07-24 19:28:48.768810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.743 qpair failed and we were unable to recover it. 00:28:02.743 [2024-07-24 19:28:48.769116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.743 [2024-07-24 19:28:48.769156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.743 qpair failed and we were unable to recover it. 
00:28:02.743 [2024-07-24 19:28:48.769469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.743 [2024-07-24 19:28:48.769509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.743 qpair failed and we were unable to recover it. 00:28:02.743 [2024-07-24 19:28:48.769871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.743 [2024-07-24 19:28:48.769912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.743 qpair failed and we were unable to recover it. 00:28:02.743 [2024-07-24 19:28:48.770131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.743 [2024-07-24 19:28:48.770172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.743 qpair failed and we were unable to recover it. 00:28:02.743 [2024-07-24 19:28:48.770562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.743 [2024-07-24 19:28:48.770603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.743 qpair failed and we were unable to recover it. 00:28:02.743 [2024-07-24 19:28:48.770960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.743 [2024-07-24 19:28:48.771007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.743 qpair failed and we were unable to recover it. 00:28:02.743 [2024-07-24 19:28:48.771293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.743 [2024-07-24 19:28:48.771305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.743 qpair failed and we were unable to recover it. 00:28:02.743 [2024-07-24 19:28:48.771596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.743 [2024-07-24 19:28:48.771635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.743 qpair failed and we were unable to recover it. 00:28:02.743 [2024-07-24 19:28:48.771934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.743 [2024-07-24 19:28:48.771975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.743 qpair failed and we were unable to recover it. 00:28:02.743 [2024-07-24 19:28:48.772275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.743 [2024-07-24 19:28:48.772314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.743 qpair failed and we were unable to recover it. 00:28:02.743 [2024-07-24 19:28:48.772653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.743 [2024-07-24 19:28:48.772693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.743 qpair failed and we were unable to recover it. 
00:28:02.744 [2024-07-24 19:28:48.773015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.744 [2024-07-24 19:28:48.773056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.744 qpair failed and we were unable to recover it. 00:28:02.744 [2024-07-24 19:28:48.773283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.744 [2024-07-24 19:28:48.773323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.744 qpair failed and we were unable to recover it. 00:28:02.744 [2024-07-24 19:28:48.773616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.744 [2024-07-24 19:28:48.773656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.744 qpair failed and we were unable to recover it. 00:28:02.744 [2024-07-24 19:28:48.774042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.744 [2024-07-24 19:28:48.774084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.744 qpair failed and we were unable to recover it. 00:28:02.744 [2024-07-24 19:28:48.774472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.744 [2024-07-24 19:28:48.774512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.744 qpair failed and we were unable to recover it. 00:28:02.744 [2024-07-24 19:28:48.774851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.744 [2024-07-24 19:28:48.774892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.744 qpair failed and we were unable to recover it. 00:28:02.744 [2024-07-24 19:28:48.775197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.744 [2024-07-24 19:28:48.775237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.744 qpair failed and we were unable to recover it. 00:28:02.744 [2024-07-24 19:28:48.775530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.744 [2024-07-24 19:28:48.775570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.744 qpair failed and we were unable to recover it. 00:28:02.744 [2024-07-24 19:28:48.775936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.744 [2024-07-24 19:28:48.775978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.744 qpair failed and we were unable to recover it. 00:28:02.744 [2024-07-24 19:28:48.776339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.744 [2024-07-24 19:28:48.776379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.744 qpair failed and we were unable to recover it. 
00:28:02.744 [2024-07-24 19:28:48.776598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.744 [2024-07-24 19:28:48.776611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.744 qpair failed and we were unable to recover it. 00:28:02.744 [2024-07-24 19:28:48.776779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.744 [2024-07-24 19:28:48.776793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.744 qpair failed and we were unable to recover it. 00:28:02.744 [2024-07-24 19:28:48.777033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.744 [2024-07-24 19:28:48.777047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.744 qpair failed and we were unable to recover it. 00:28:02.744 [2024-07-24 19:28:48.777293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.744 [2024-07-24 19:28:48.777307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.744 qpair failed and we were unable to recover it. 00:28:02.744 [2024-07-24 19:28:48.777516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.744 [2024-07-24 19:28:48.777530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.744 qpair failed and we were unable to recover it. 00:28:02.744 [2024-07-24 19:28:48.777771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.744 [2024-07-24 19:28:48.777784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.744 qpair failed and we were unable to recover it. 00:28:02.744 [2024-07-24 19:28:48.777929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.744 [2024-07-24 19:28:48.777941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.744 qpair failed and we were unable to recover it. 00:28:02.744 [2024-07-24 19:28:48.778174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.744 [2024-07-24 19:28:48.778188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.744 qpair failed and we were unable to recover it. 00:28:02.744 [2024-07-24 19:28:48.778412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.744 [2024-07-24 19:28:48.778425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.744 qpair failed and we were unable to recover it. 00:28:02.744 [2024-07-24 19:28:48.778580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.744 [2024-07-24 19:28:48.778593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.744 qpair failed and we were unable to recover it. 
00:28:02.744 [2024-07-24 19:28:48.778879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.744 [2024-07-24 19:28:48.778915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.744 qpair failed and we were unable to recover it. 00:28:02.744 [2024-07-24 19:28:48.779202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.744 [2024-07-24 19:28:48.779242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.744 qpair failed and we were unable to recover it. 00:28:02.744 [2024-07-24 19:28:48.779525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.744 [2024-07-24 19:28:48.779564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.744 qpair failed and we were unable to recover it. 00:28:02.744 [2024-07-24 19:28:48.779934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.744 [2024-07-24 19:28:48.779975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.744 qpair failed and we were unable to recover it. 00:28:02.744 [2024-07-24 19:28:48.780265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.744 [2024-07-24 19:28:48.780305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.744 qpair failed and we were unable to recover it. 00:28:02.744 [2024-07-24 19:28:48.780540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.744 [2024-07-24 19:28:48.780580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.744 qpair failed and we were unable to recover it. 00:28:02.744 [2024-07-24 19:28:48.780863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.745 [2024-07-24 19:28:48.780903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.745 qpair failed and we were unable to recover it. 00:28:02.745 [2024-07-24 19:28:48.781199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.745 [2024-07-24 19:28:48.781239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.745 qpair failed and we were unable to recover it. 00:28:02.745 [2024-07-24 19:28:48.781552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.745 [2024-07-24 19:28:48.781592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.745 qpair failed and we were unable to recover it. 00:28:02.745 [2024-07-24 19:28:48.781915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.745 [2024-07-24 19:28:48.781956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.745 qpair failed and we were unable to recover it. 
00:28:02.745 [2024-07-24 19:28:48.782234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.745 [2024-07-24 19:28:48.782273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.745 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every connection attempt between 19:28:48.782 and 19:28:48.849 ...]
00:28:02.751 [2024-07-24 19:28:48.849230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.751 [2024-07-24 19:28:48.849270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.751 qpair failed and we were unable to recover it.
00:28:02.751 [2024-07-24 19:28:48.849631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.751 [2024-07-24 19:28:48.849672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.751 qpair failed and we were unable to recover it. 00:28:02.751 [2024-07-24 19:28:48.849909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.751 [2024-07-24 19:28:48.849949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.751 qpair failed and we were unable to recover it. 00:28:02.751 [2024-07-24 19:28:48.850331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.751 [2024-07-24 19:28:48.850371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.751 qpair failed and we were unable to recover it. 00:28:02.751 [2024-07-24 19:28:48.850723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.751 [2024-07-24 19:28:48.850764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.751 qpair failed and we were unable to recover it. 00:28:02.751 [2024-07-24 19:28:48.851137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.751 [2024-07-24 19:28:48.851176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.752 qpair failed and we were unable to recover it. 00:28:02.752 [2024-07-24 19:28:48.851446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.752 [2024-07-24 19:28:48.851459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.752 qpair failed and we were unable to recover it. 00:28:02.752 [2024-07-24 19:28:48.851671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.752 [2024-07-24 19:28:48.851684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.752 qpair failed and we were unable to recover it. 00:28:02.752 [2024-07-24 19:28:48.851976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.752 [2024-07-24 19:28:48.852016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.752 qpair failed and we were unable to recover it. 00:28:02.752 [2024-07-24 19:28:48.852299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.752 [2024-07-24 19:28:48.852339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.752 qpair failed and we were unable to recover it. 00:28:02.752 [2024-07-24 19:28:48.852657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.752 [2024-07-24 19:28:48.852697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.752 qpair failed and we were unable to recover it. 
00:28:02.752 [2024-07-24 19:28:48.852988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.752 [2024-07-24 19:28:48.853029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.752 qpair failed and we were unable to recover it. 00:28:02.752 [2024-07-24 19:28:48.853310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.752 [2024-07-24 19:28:48.853350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.752 qpair failed and we were unable to recover it. 00:28:02.752 [2024-07-24 19:28:48.853567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.752 [2024-07-24 19:28:48.853606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.752 qpair failed and we were unable to recover it. 00:28:02.752 [2024-07-24 19:28:48.853902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.752 [2024-07-24 19:28:48.853915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.752 qpair failed and we were unable to recover it. 00:28:02.752 [2024-07-24 19:28:48.854093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.752 [2024-07-24 19:28:48.854133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.752 qpair failed and we were unable to recover it. 00:28:02.752 [2024-07-24 19:28:48.854493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.752 [2024-07-24 19:28:48.854533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.752 qpair failed and we were unable to recover it. 00:28:02.752 [2024-07-24 19:28:48.854805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.752 [2024-07-24 19:28:48.854819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.752 qpair failed and we were unable to recover it. 00:28:02.752 [2024-07-24 19:28:48.855066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.752 [2024-07-24 19:28:48.855111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.752 qpair failed and we were unable to recover it. 00:28:02.752 [2024-07-24 19:28:48.855396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.752 [2024-07-24 19:28:48.855435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.752 qpair failed and we were unable to recover it. 00:28:02.752 [2024-07-24 19:28:48.855733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.752 [2024-07-24 19:28:48.855775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.752 qpair failed and we were unable to recover it. 
00:28:02.752 [2024-07-24 19:28:48.855983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.752 [2024-07-24 19:28:48.856023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.752 qpair failed and we were unable to recover it. 00:28:02.752 [2024-07-24 19:28:48.856314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.752 [2024-07-24 19:28:48.856355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.752 qpair failed and we were unable to recover it. 00:28:02.752 [2024-07-24 19:28:48.856560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.752 [2024-07-24 19:28:48.856600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.752 qpair failed and we were unable to recover it. 00:28:02.752 [2024-07-24 19:28:48.856951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.752 [2024-07-24 19:28:48.856992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.752 qpair failed and we were unable to recover it. 00:28:02.752 [2024-07-24 19:28:48.857332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.752 [2024-07-24 19:28:48.857373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.752 qpair failed and we were unable to recover it. 00:28:02.752 [2024-07-24 19:28:48.857701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.752 [2024-07-24 19:28:48.857720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.752 qpair failed and we were unable to recover it. 00:28:02.752 [2024-07-24 19:28:48.857939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.752 [2024-07-24 19:28:48.857953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.752 qpair failed and we were unable to recover it. 00:28:02.752 [2024-07-24 19:28:48.858250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.752 [2024-07-24 19:28:48.858291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.752 qpair failed and we were unable to recover it. 00:28:02.752 [2024-07-24 19:28:48.858632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.752 [2024-07-24 19:28:48.858672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.752 qpair failed and we were unable to recover it. 00:28:02.752 [2024-07-24 19:28:48.858953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.752 [2024-07-24 19:28:48.858994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.752 qpair failed and we were unable to recover it. 
00:28:02.752 [2024-07-24 19:28:48.859336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.752 [2024-07-24 19:28:48.859376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.752 qpair failed and we were unable to recover it. 00:28:02.752 [2024-07-24 19:28:48.859591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.752 [2024-07-24 19:28:48.859631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.752 qpair failed and we were unable to recover it. 00:28:02.752 [2024-07-24 19:28:48.859993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.752 [2024-07-24 19:28:48.860034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.752 qpair failed and we were unable to recover it. 00:28:02.752 [2024-07-24 19:28:48.860249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.753 [2024-07-24 19:28:48.860288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.753 qpair failed and we were unable to recover it. 00:28:02.753 [2024-07-24 19:28:48.860658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.753 [2024-07-24 19:28:48.860698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.753 qpair failed and we were unable to recover it. 00:28:02.753 [2024-07-24 19:28:48.861029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.753 [2024-07-24 19:28:48.861070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.753 qpair failed and we were unable to recover it. 00:28:02.753 [2024-07-24 19:28:48.861429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.753 [2024-07-24 19:28:48.861468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.753 qpair failed and we were unable to recover it. 00:28:02.753 [2024-07-24 19:28:48.861686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.753 [2024-07-24 19:28:48.861699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.753 qpair failed and we were unable to recover it. 00:28:02.753 [2024-07-24 19:28:48.861961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.753 [2024-07-24 19:28:48.861974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.753 qpair failed and we were unable to recover it. 00:28:02.753 [2024-07-24 19:28:48.862280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.753 [2024-07-24 19:28:48.862321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.753 qpair failed and we were unable to recover it. 
00:28:02.753 [2024-07-24 19:28:48.862661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.753 [2024-07-24 19:28:48.862699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.753 qpair failed and we were unable to recover it. 00:28:02.753 [2024-07-24 19:28:48.862873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.753 [2024-07-24 19:28:48.862886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.753 qpair failed and we were unable to recover it. 00:28:02.753 [2024-07-24 19:28:48.863103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.753 [2024-07-24 19:28:48.863143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.753 qpair failed and we were unable to recover it. 00:28:02.753 [2024-07-24 19:28:48.863494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.753 [2024-07-24 19:28:48.863533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.753 qpair failed and we were unable to recover it. 00:28:02.753 [2024-07-24 19:28:48.863811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.753 [2024-07-24 19:28:48.863824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.753 qpair failed and we were unable to recover it. 00:28:02.753 [2024-07-24 19:28:48.864038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.753 [2024-07-24 19:28:48.864051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.753 qpair failed and we were unable to recover it. 00:28:02.753 [2024-07-24 19:28:48.864304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.753 [2024-07-24 19:28:48.864317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.753 qpair failed and we were unable to recover it. 00:28:02.753 [2024-07-24 19:28:48.864472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.753 [2024-07-24 19:28:48.864485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.753 qpair failed and we were unable to recover it. 00:28:02.753 [2024-07-24 19:28:48.864633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.753 [2024-07-24 19:28:48.864646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.753 qpair failed and we were unable to recover it. 00:28:02.753 [2024-07-24 19:28:48.864861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.753 [2024-07-24 19:28:48.864901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.753 qpair failed and we were unable to recover it. 
00:28:02.753 [2024-07-24 19:28:48.865173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.753 [2024-07-24 19:28:48.865213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.753 qpair failed and we were unable to recover it. 00:28:02.753 [2024-07-24 19:28:48.865500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.753 [2024-07-24 19:28:48.865539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.753 qpair failed and we were unable to recover it. 00:28:02.753 [2024-07-24 19:28:48.865829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.753 [2024-07-24 19:28:48.865870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.753 qpair failed and we were unable to recover it. 00:28:02.753 [2024-07-24 19:28:48.866230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.753 [2024-07-24 19:28:48.866270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.753 qpair failed and we were unable to recover it. 00:28:02.753 [2024-07-24 19:28:48.866602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.753 [2024-07-24 19:28:48.866643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.753 qpair failed and we were unable to recover it. 00:28:02.753 [2024-07-24 19:28:48.866938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.753 [2024-07-24 19:28:48.866979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.753 qpair failed and we were unable to recover it. 00:28:02.753 [2024-07-24 19:28:48.867274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.753 [2024-07-24 19:28:48.867313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.753 qpair failed and we were unable to recover it. 00:28:02.753 [2024-07-24 19:28:48.867616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.753 [2024-07-24 19:28:48.867656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.753 qpair failed and we were unable to recover it. 00:28:02.753 [2024-07-24 19:28:48.867889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.753 [2024-07-24 19:28:48.867929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.753 qpair failed and we were unable to recover it. 00:28:02.753 [2024-07-24 19:28:48.868224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.753 [2024-07-24 19:28:48.868264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.753 qpair failed and we were unable to recover it. 
00:28:02.753 [2024-07-24 19:28:48.868553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.753 [2024-07-24 19:28:48.868593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.753 qpair failed and we were unable to recover it. 00:28:02.753 [2024-07-24 19:28:48.868932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.753 [2024-07-24 19:28:48.868983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.753 qpair failed and we were unable to recover it. 00:28:02.753 [2024-07-24 19:28:48.869273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.753 [2024-07-24 19:28:48.869313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.753 qpair failed and we were unable to recover it. 00:28:02.753 [2024-07-24 19:28:48.869558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.753 [2024-07-24 19:28:48.869571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.753 qpair failed and we were unable to recover it. 00:28:02.753 [2024-07-24 19:28:48.869879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.753 [2024-07-24 19:28:48.869920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.753 qpair failed and we were unable to recover it. 00:28:02.753 [2024-07-24 19:28:48.870214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.753 [2024-07-24 19:28:48.870254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.753 qpair failed and we were unable to recover it. 00:28:02.753 [2024-07-24 19:28:48.870426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.753 [2024-07-24 19:28:48.870439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.753 qpair failed and we were unable to recover it. 00:28:02.754 [2024-07-24 19:28:48.870700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.754 [2024-07-24 19:28:48.870713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.754 qpair failed and we were unable to recover it. 00:28:02.754 [2024-07-24 19:28:48.870912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.754 [2024-07-24 19:28:48.870925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.754 qpair failed and we were unable to recover it. 00:28:02.754 [2024-07-24 19:28:48.871166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.754 [2024-07-24 19:28:48.871179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.754 qpair failed and we were unable to recover it. 
00:28:02.754 [2024-07-24 19:28:48.871453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.754 [2024-07-24 19:28:48.871466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.754 qpair failed and we were unable to recover it. 00:28:02.754 [2024-07-24 19:28:48.871626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.754 [2024-07-24 19:28:48.871639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.754 qpair failed and we were unable to recover it. 00:28:02.754 [2024-07-24 19:28:48.871847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.754 [2024-07-24 19:28:48.871861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.754 qpair failed and we were unable to recover it. 00:28:02.754 [2024-07-24 19:28:48.871964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.754 [2024-07-24 19:28:48.871977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.754 qpair failed and we were unable to recover it. 00:28:02.754 [2024-07-24 19:28:48.872253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.754 [2024-07-24 19:28:48.872270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.754 qpair failed and we were unable to recover it. 00:28:02.754 [2024-07-24 19:28:48.872444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.754 [2024-07-24 19:28:48.872458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.754 qpair failed and we were unable to recover it. 00:28:02.754 [2024-07-24 19:28:48.872769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.754 [2024-07-24 19:28:48.872783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.754 qpair failed and we were unable to recover it. 00:28:02.754 [2024-07-24 19:28:48.873015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.754 [2024-07-24 19:28:48.873028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.754 qpair failed and we were unable to recover it. 00:28:02.754 [2024-07-24 19:28:48.873248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.754 [2024-07-24 19:28:48.873263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.754 qpair failed and we were unable to recover it. 00:28:02.754 [2024-07-24 19:28:48.873434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.754 [2024-07-24 19:28:48.873476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.754 qpair failed and we were unable to recover it. 
00:28:02.754 [2024-07-24 19:28:48.873711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.754 [2024-07-24 19:28:48.873763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.754 qpair failed and we were unable to recover it. 00:28:02.754 [2024-07-24 19:28:48.874106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.754 [2024-07-24 19:28:48.874146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.754 qpair failed and we were unable to recover it. 00:28:02.754 [2024-07-24 19:28:48.874371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.754 [2024-07-24 19:28:48.874413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.754 qpair failed and we were unable to recover it. 00:28:02.754 [2024-07-24 19:28:48.874696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.754 [2024-07-24 19:28:48.874710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.754 qpair failed and we were unable to recover it. 00:28:02.754 [2024-07-24 19:28:48.874992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.754 [2024-07-24 19:28:48.875006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.754 qpair failed and we were unable to recover it. 00:28:02.754 [2024-07-24 19:28:48.875295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.754 [2024-07-24 19:28:48.875309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.754 qpair failed and we were unable to recover it. 00:28:02.754 [2024-07-24 19:28:48.875558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.754 [2024-07-24 19:28:48.875600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.754 qpair failed and we were unable to recover it. 00:28:02.754 [2024-07-24 19:28:48.875874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.754 [2024-07-24 19:28:48.875915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.754 qpair failed and we were unable to recover it. 00:28:02.754 [2024-07-24 19:28:48.876228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.754 [2024-07-24 19:28:48.876269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.754 qpair failed and we were unable to recover it. 00:28:02.754 [2024-07-24 19:28:48.876610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.754 [2024-07-24 19:28:48.876662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.754 qpair failed and we were unable to recover it. 
00:28:02.754 [2024-07-24 19:28:48.877065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.754 [2024-07-24 19:28:48.877105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.754 qpair failed and we were unable to recover it. 00:28:02.754 [2024-07-24 19:28:48.877393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.754 [2024-07-24 19:28:48.877433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.754 qpair failed and we were unable to recover it. 00:28:02.754 [2024-07-24 19:28:48.877757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.754 [2024-07-24 19:28:48.877807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.754 qpair failed and we were unable to recover it. 00:28:02.754 [2024-07-24 19:28:48.878108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.754 [2024-07-24 19:28:48.878149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.754 qpair failed and we were unable to recover it. 00:28:02.754 [2024-07-24 19:28:48.878526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.754 [2024-07-24 19:28:48.878568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.754 qpair failed and we were unable to recover it. 00:28:02.754 [2024-07-24 19:28:48.878854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.754 [2024-07-24 19:28:48.878894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.754 qpair failed and we were unable to recover it. 00:28:02.754 [2024-07-24 19:28:48.879120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.754 [2024-07-24 19:28:48.879160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.754 qpair failed and we were unable to recover it. 00:28:02.754 [2024-07-24 19:28:48.879447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.754 [2024-07-24 19:28:48.879487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.754 qpair failed and we were unable to recover it. 00:28:02.754 [2024-07-24 19:28:48.879818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.754 [2024-07-24 19:28:48.879832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.754 qpair failed and we were unable to recover it. 00:28:02.754 [2024-07-24 19:28:48.880131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.754 [2024-07-24 19:28:48.880144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.754 qpair failed and we were unable to recover it. 
00:28:02.754 [2024-07-24 19:28:48.880340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.754 [2024-07-24 19:28:48.880379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.754 qpair failed and we were unable to recover it. 00:28:02.754 [2024-07-24 19:28:48.880740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.754 [2024-07-24 19:28:48.880793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.754 qpair failed and we were unable to recover it. 00:28:02.755 [2024-07-24 19:28:48.881133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.755 [2024-07-24 19:28:48.881173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.755 qpair failed and we were unable to recover it. 00:28:02.755 [2024-07-24 19:28:48.881525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.755 [2024-07-24 19:28:48.881538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.755 qpair failed and we were unable to recover it. 00:28:02.755 [2024-07-24 19:28:48.881821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.755 [2024-07-24 19:28:48.881834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.755 qpair failed and we were unable to recover it. 00:28:02.755 [2024-07-24 19:28:48.882091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.755 [2024-07-24 19:28:48.882131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.755 qpair failed and we were unable to recover it. 00:28:02.755 [2024-07-24 19:28:48.882414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.755 [2024-07-24 19:28:48.882454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.755 qpair failed and we were unable to recover it. 00:28:02.755 [2024-07-24 19:28:48.882693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.755 [2024-07-24 19:28:48.882743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.755 qpair failed and we were unable to recover it. 00:28:02.755 [2024-07-24 19:28:48.883054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.755 [2024-07-24 19:28:48.883094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.755 qpair failed and we were unable to recover it. 00:28:02.755 [2024-07-24 19:28:48.883386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.755 [2024-07-24 19:28:48.883427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.755 qpair failed and we were unable to recover it. 
00:28:02.755 [2024-07-24 19:28:48.883772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.755 [2024-07-24 19:28:48.883786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.755 qpair failed and we were unable to recover it. 00:28:02.755 [2024-07-24 19:28:48.884074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.755 [2024-07-24 19:28:48.884087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.755 qpair failed and we were unable to recover it. 00:28:02.755 [2024-07-24 19:28:48.884385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.755 [2024-07-24 19:28:48.884425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.755 qpair failed and we were unable to recover it. 00:28:02.755 [2024-07-24 19:28:48.884767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.755 [2024-07-24 19:28:48.884807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.755 qpair failed and we were unable to recover it. 00:28:02.755 [2024-07-24 19:28:48.885080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.755 [2024-07-24 19:28:48.885120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.755 qpair failed and we were unable to recover it. 00:28:02.755 [2024-07-24 19:28:48.885401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.755 [2024-07-24 19:28:48.885440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.755 qpair failed and we were unable to recover it. 00:28:02.755 [2024-07-24 19:28:48.885730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.755 [2024-07-24 19:28:48.885743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.755 qpair failed and we were unable to recover it. 00:28:02.755 [2024-07-24 19:28:48.885898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.755 [2024-07-24 19:28:48.885911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.755 qpair failed and we were unable to recover it. 00:28:02.755 [2024-07-24 19:28:48.886157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.755 [2024-07-24 19:28:48.886170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.755 qpair failed and we were unable to recover it. 00:28:02.755 [2024-07-24 19:28:48.886425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.755 [2024-07-24 19:28:48.886465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.755 qpair failed and we were unable to recover it. 
00:28:02.755 [2024-07-24 19:28:48.886681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.755 [2024-07-24 19:28:48.886732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.755 qpair failed and we were unable to recover it. 00:28:02.755 [2024-07-24 19:28:48.887017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.755 [2024-07-24 19:28:48.887057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.755 qpair failed and we were unable to recover it. 00:28:02.755 [2024-07-24 19:28:48.887337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.755 [2024-07-24 19:28:48.887375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.755 qpair failed and we were unable to recover it. 00:28:02.755 [2024-07-24 19:28:48.887529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.755 [2024-07-24 19:28:48.887542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.755 qpair failed and we were unable to recover it. 00:28:02.755 [2024-07-24 19:28:48.887753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.755 [2024-07-24 19:28:48.887766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.755 qpair failed and we were unable to recover it. 00:28:02.755 [2024-07-24 19:28:48.887988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.755 [2024-07-24 19:28:48.888001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.755 qpair failed and we were unable to recover it. 00:28:02.755 [2024-07-24 19:28:48.888286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.755 [2024-07-24 19:28:48.888326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.755 qpair failed and we were unable to recover it. 00:28:02.755 [2024-07-24 19:28:48.888599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.755 [2024-07-24 19:28:48.888639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.755 qpair failed and we were unable to recover it. 00:28:02.755 [2024-07-24 19:28:48.889031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.755 [2024-07-24 19:28:48.889045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.755 qpair failed and we were unable to recover it. 00:28:02.755 [2024-07-24 19:28:48.889273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.755 [2024-07-24 19:28:48.889286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.755 qpair failed and we were unable to recover it. 
00:28:02.755 [2024-07-24 19:28:48.889449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.755 [2024-07-24 19:28:48.889462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.755 qpair failed and we were unable to recover it. 00:28:02.755 [2024-07-24 19:28:48.889632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.755 [2024-07-24 19:28:48.889645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.755 qpair failed and we were unable to recover it. 00:28:02.755 [2024-07-24 19:28:48.889796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.755 [2024-07-24 19:28:48.889810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.755 qpair failed and we were unable to recover it. 00:28:02.755 [2024-07-24 19:28:48.890027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.755 [2024-07-24 19:28:48.890066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.755 qpair failed and we were unable to recover it. 00:28:02.755 [2024-07-24 19:28:48.890344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.755 [2024-07-24 19:28:48.890383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.755 qpair failed and we were unable to recover it. 00:28:02.755 [2024-07-24 19:28:48.890671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.755 [2024-07-24 19:28:48.890711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.755 qpair failed and we were unable to recover it. 00:28:02.755 [2024-07-24 19:28:48.891009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.756 [2024-07-24 19:28:48.891050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.756 qpair failed and we were unable to recover it. 00:28:02.756 [2024-07-24 19:28:48.891260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.756 [2024-07-24 19:28:48.891300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.756 qpair failed and we were unable to recover it. 00:28:02.756 [2024-07-24 19:28:48.891585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.756 [2024-07-24 19:28:48.891640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.756 qpair failed and we were unable to recover it. 00:28:02.756 [2024-07-24 19:28:48.891880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.756 [2024-07-24 19:28:48.891893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.756 qpair failed and we were unable to recover it. 
00:28:02.756 [2024-07-24 19:28:48.892121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.756 [2024-07-24 19:28:48.892134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.756 qpair failed and we were unable to recover it. 00:28:02.756 [2024-07-24 19:28:48.892418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.756 [2024-07-24 19:28:48.892464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.756 qpair failed and we were unable to recover it. 00:28:02.756 [2024-07-24 19:28:48.892687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.756 [2024-07-24 19:28:48.892733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.756 qpair failed and we were unable to recover it. 00:28:02.756 [2024-07-24 19:28:48.893114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.756 [2024-07-24 19:28:48.893127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.756 qpair failed and we were unable to recover it. 00:28:02.756 [2024-07-24 19:28:48.893351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.756 [2024-07-24 19:28:48.893363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.756 qpair failed and we were unable to recover it. 00:28:02.756 [2024-07-24 19:28:48.893520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.756 [2024-07-24 19:28:48.893533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.756 qpair failed and we were unable to recover it. 00:28:02.756 [2024-07-24 19:28:48.893692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.756 [2024-07-24 19:28:48.893705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.756 qpair failed and we were unable to recover it. 00:28:02.756 [2024-07-24 19:28:48.893848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.756 [2024-07-24 19:28:48.893861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.756 qpair failed and we were unable to recover it. 00:28:02.756 [2024-07-24 19:28:48.894039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.756 [2024-07-24 19:28:48.894052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.756 qpair failed and we were unable to recover it. 00:28:02.756 [2024-07-24 19:28:48.894336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.756 [2024-07-24 19:28:48.894376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.756 qpair failed and we were unable to recover it. 
00:28:02.756 [2024-07-24 19:28:48.894529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.756 [2024-07-24 19:28:48.894568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.756 qpair failed and we were unable to recover it.
00:28:02.756 [2024-07-24 19:28:48.894844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.756 [2024-07-24 19:28:48.894858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.756 qpair failed and we were unable to recover it.
00:28:02.756 [2024-07-24 19:28:48.895016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.756 [2024-07-24 19:28:48.895030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.756 qpair failed and we were unable to recover it.
00:28:02.756 [2024-07-24 19:28:48.895256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.756 [2024-07-24 19:28:48.895269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.756 qpair failed and we were unable to recover it.
00:28:02.756 [2024-07-24 19:28:48.895513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.756 [2024-07-24 19:28:48.895526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.756 qpair failed and we were unable to recover it.
00:28:02.756 [2024-07-24 19:28:48.895832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.756 [2024-07-24 19:28:48.895873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.756 qpair failed and we were unable to recover it.
00:28:02.756 [2024-07-24 19:28:48.896149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.756 [2024-07-24 19:28:48.896189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.756 qpair failed and we were unable to recover it.
00:28:02.756 [2024-07-24 19:28:48.896476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.756 [2024-07-24 19:28:48.896520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.756 qpair failed and we were unable to recover it.
00:28:02.756 [2024-07-24 19:28:48.896617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.756 [2024-07-24 19:28:48.896630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.756 qpair failed and we were unable to recover it.
00:28:02.756 [2024-07-24 19:28:48.896789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.756 [2024-07-24 19:28:48.896822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.756 qpair failed and we were unable to recover it.
00:28:02.756 [2024-07-24 19:28:48.897101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.756 [2024-07-24 19:28:48.897141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.756 qpair failed and we were unable to recover it.
00:28:02.756 [2024-07-24 19:28:48.897507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.756 [2024-07-24 19:28:48.897547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.756 qpair failed and we were unable to recover it.
00:28:02.756 [2024-07-24 19:28:48.897823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.756 [2024-07-24 19:28:48.897863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.756 qpair failed and we were unable to recover it.
00:28:02.756 [2024-07-24 19:28:48.898237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.756 [2024-07-24 19:28:48.898276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.756 qpair failed and we were unable to recover it.
00:28:02.756 [2024-07-24 19:28:48.898616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.756 [2024-07-24 19:28:48.898656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.756 qpair failed and we were unable to recover it.
00:28:02.756 [2024-07-24 19:28:48.898880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.756 [2024-07-24 19:28:48.898893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.756 qpair failed and we were unable to recover it.
00:28:02.756 [2024-07-24 19:28:48.899152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.756 [2024-07-24 19:28:48.899192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.756 qpair failed and we were unable to recover it.
00:28:02.756 [2024-07-24 19:28:48.899485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.756 [2024-07-24 19:28:48.899525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.756 qpair failed and we were unable to recover it.
00:28:02.756 [2024-07-24 19:28:48.899746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.756 [2024-07-24 19:28:48.899760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.756 qpair failed and we were unable to recover it.
00:28:02.756 [2024-07-24 19:28:48.900006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.756 [2024-07-24 19:28:48.900049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.756 qpair failed and we were unable to recover it.
00:28:02.756 [2024-07-24 19:28:48.900321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.756 [2024-07-24 19:28:48.900360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.756 qpair failed and we were unable to recover it.
00:28:02.756 [2024-07-24 19:28:48.900645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.756 [2024-07-24 19:28:48.900685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.757 qpair failed and we were unable to recover it.
00:28:02.757 [2024-07-24 19:28:48.900985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.757 [2024-07-24 19:28:48.901027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.757 qpair failed and we were unable to recover it.
00:28:02.757 [2024-07-24 19:28:48.901308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.757 [2024-07-24 19:28:48.901348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.757 qpair failed and we were unable to recover it.
00:28:02.757 [2024-07-24 19:28:48.901643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.757 [2024-07-24 19:28:48.901656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.757 qpair failed and we were unable to recover it.
00:28:02.757 [2024-07-24 19:28:48.901862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.757 [2024-07-24 19:28:48.901875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.757 qpair failed and we were unable to recover it.
00:28:02.757 [2024-07-24 19:28:48.902114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.757 [2024-07-24 19:28:48.902127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.757 qpair failed and we were unable to recover it.
00:28:02.757 [2024-07-24 19:28:48.902404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.757 [2024-07-24 19:28:48.902417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.757 qpair failed and we were unable to recover it.
00:28:02.757 [2024-07-24 19:28:48.902600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.757 [2024-07-24 19:28:48.902613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.757 qpair failed and we were unable to recover it.
00:28:02.757 [2024-07-24 19:28:48.902763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.757 [2024-07-24 19:28:48.902803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.757 qpair failed and we were unable to recover it.
00:28:02.757 [2024-07-24 19:28:48.903170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.757 [2024-07-24 19:28:48.903211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.757 qpair failed and we were unable to recover it.
00:28:02.757 [2024-07-24 19:28:48.903492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.757 [2024-07-24 19:28:48.903537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.757 qpair failed and we were unable to recover it.
00:28:02.757 [2024-07-24 19:28:48.903818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.757 [2024-07-24 19:28:48.903831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.757 qpair failed and we were unable to recover it.
00:28:02.757 [2024-07-24 19:28:48.904130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.757 [2024-07-24 19:28:48.904143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.757 qpair failed and we were unable to recover it.
00:28:02.757 [2024-07-24 19:28:48.904360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.757 [2024-07-24 19:28:48.904373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.757 qpair failed and we were unable to recover it.
00:28:02.757 [2024-07-24 19:28:48.904523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.757 [2024-07-24 19:28:48.904536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.757 qpair failed and we were unable to recover it.
00:28:02.757 [2024-07-24 19:28:48.904701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.757 [2024-07-24 19:28:48.904718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.757 qpair failed and we were unable to recover it.
00:28:02.757 [2024-07-24 19:28:48.904930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.757 [2024-07-24 19:28:48.904944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.757 qpair failed and we were unable to recover it.
00:28:02.757 [2024-07-24 19:28:48.905219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.757 [2024-07-24 19:28:48.905232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.757 qpair failed and we were unable to recover it.
00:28:02.757 [2024-07-24 19:28:48.905463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.757 [2024-07-24 19:28:48.905502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.757 qpair failed and we were unable to recover it.
00:28:02.757 [2024-07-24 19:28:48.905781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.757 [2024-07-24 19:28:48.905822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.757 qpair failed and we were unable to recover it.
00:28:02.757 [2024-07-24 19:28:48.906186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.757 [2024-07-24 19:28:48.906226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.757 qpair failed and we were unable to recover it.
00:28:02.757 [2024-07-24 19:28:48.906511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.757 [2024-07-24 19:28:48.906551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.757 qpair failed and we were unable to recover it.
00:28:02.757 [2024-07-24 19:28:48.906847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.757 [2024-07-24 19:28:48.906861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.757 qpair failed and we were unable to recover it.
00:28:02.757 [2024-07-24 19:28:48.907109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.757 [2024-07-24 19:28:48.907122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.757 qpair failed and we were unable to recover it.
00:28:02.757 [2024-07-24 19:28:48.907301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.757 [2024-07-24 19:28:48.907314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.757 qpair failed and we were unable to recover it.
00:28:02.757 [2024-07-24 19:28:48.907583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.757 [2024-07-24 19:28:48.907597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.757 qpair failed and we were unable to recover it.
00:28:02.757 [2024-07-24 19:28:48.907832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.757 [2024-07-24 19:28:48.907845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.757 qpair failed and we were unable to recover it.
00:28:02.757 [2024-07-24 19:28:48.908147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.757 [2024-07-24 19:28:48.908187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.757 qpair failed and we were unable to recover it.
00:28:02.757 [2024-07-24 19:28:48.908513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.757 [2024-07-24 19:28:48.908552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.757 qpair failed and we were unable to recover it.
00:28:02.757 [2024-07-24 19:28:48.908831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.757 [2024-07-24 19:28:48.908885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.757 qpair failed and we were unable to recover it.
00:28:02.757 [2024-07-24 19:28:48.909134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.758 [2024-07-24 19:28:48.909147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.758 qpair failed and we were unable to recover it.
00:28:02.758 [2024-07-24 19:28:48.909367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.758 [2024-07-24 19:28:48.909380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.758 qpair failed and we were unable to recover it.
00:28:02.758 [2024-07-24 19:28:48.909690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.758 [2024-07-24 19:28:48.909740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.758 qpair failed and we were unable to recover it.
00:28:02.758 [2024-07-24 19:28:48.910012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.758 [2024-07-24 19:28:48.910052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.758 qpair failed and we were unable to recover it.
00:28:02.758 [2024-07-24 19:28:48.910345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.758 [2024-07-24 19:28:48.910384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.758 qpair failed and we were unable to recover it.
00:28:02.758 [2024-07-24 19:28:48.910666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.758 [2024-07-24 19:28:48.910679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.758 qpair failed and we were unable to recover it.
00:28:02.758 [2024-07-24 19:28:48.910859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.758 [2024-07-24 19:28:48.910873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.758 qpair failed and we were unable to recover it.
00:28:02.758 [2024-07-24 19:28:48.911028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.758 [2024-07-24 19:28:48.911042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.758 qpair failed and we were unable to recover it.
00:28:02.758 [2024-07-24 19:28:48.911263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.758 [2024-07-24 19:28:48.911276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.758 qpair failed and we were unable to recover it.
00:28:02.758 [2024-07-24 19:28:48.911510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.758 [2024-07-24 19:28:48.911550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.758 qpair failed and we were unable to recover it.
00:28:02.758 [2024-07-24 19:28:48.911753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.758 [2024-07-24 19:28:48.911793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.758 qpair failed and we were unable to recover it.
00:28:02.758 [2024-07-24 19:28:48.912152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.758 [2024-07-24 19:28:48.912192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.758 qpair failed and we were unable to recover it.
00:28:02.758 [2024-07-24 19:28:48.912543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.758 [2024-07-24 19:28:48.912556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.758 qpair failed and we were unable to recover it.
00:28:02.758 [2024-07-24 19:28:48.912784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.758 [2024-07-24 19:28:48.912797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.758 qpair failed and we were unable to recover it.
00:28:02.758 [2024-07-24 19:28:48.913073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.758 [2024-07-24 19:28:48.913086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.758 qpair failed and we were unable to recover it.
00:28:02.758 [2024-07-24 19:28:48.913363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.758 [2024-07-24 19:28:48.913377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.758 qpair failed and we were unable to recover it.
00:28:02.758 [2024-07-24 19:28:48.913544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.758 [2024-07-24 19:28:48.913557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.758 qpair failed and we were unable to recover it.
00:28:02.758 [2024-07-24 19:28:48.913785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.758 [2024-07-24 19:28:48.913825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.758 qpair failed and we were unable to recover it.
00:28:02.758 [2024-07-24 19:28:48.914106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.758 [2024-07-24 19:28:48.914147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.758 qpair failed and we were unable to recover it.
00:28:02.758 [2024-07-24 19:28:48.914441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.758 [2024-07-24 19:28:48.914454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.758 qpair failed and we were unable to recover it.
00:28:02.758 [2024-07-24 19:28:48.914708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.758 [2024-07-24 19:28:48.914727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.758 qpair failed and we were unable to recover it.
00:28:02.758 [2024-07-24 19:28:48.914870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.758 [2024-07-24 19:28:48.914884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.758 qpair failed and we were unable to recover it.
00:28:02.758 [2024-07-24 19:28:48.915065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.758 [2024-07-24 19:28:48.915078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.758 qpair failed and we were unable to recover it.
00:28:02.758 [2024-07-24 19:28:48.915320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.758 [2024-07-24 19:28:48.915333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.758 qpair failed and we were unable to recover it.
00:28:02.758 [2024-07-24 19:28:48.915538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.758 [2024-07-24 19:28:48.915551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.758 qpair failed and we were unable to recover it.
00:28:02.758 [2024-07-24 19:28:48.915860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.758 [2024-07-24 19:28:48.915901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.758 qpair failed and we were unable to recover it.
00:28:02.758 [2024-07-24 19:28:48.916193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.758 [2024-07-24 19:28:48.916233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.758 qpair failed and we were unable to recover it.
00:28:02.758 [2024-07-24 19:28:48.916447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.758 [2024-07-24 19:28:48.916487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.758 qpair failed and we were unable to recover it.
00:28:02.758 [2024-07-24 19:28:48.916825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.758 [2024-07-24 19:28:48.916866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.758 qpair failed and we were unable to recover it.
00:28:02.758 [2024-07-24 19:28:48.917204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.758 [2024-07-24 19:28:48.917245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.758 qpair failed and we were unable to recover it.
00:28:02.758 [2024-07-24 19:28:48.917573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.758 [2024-07-24 19:28:48.917587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.758 qpair failed and we were unable to recover it.
00:28:02.758 [2024-07-24 19:28:48.917820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.758 [2024-07-24 19:28:48.917861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.758 qpair failed and we were unable to recover it.
00:28:02.758 [2024-07-24 19:28:48.918149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.758 [2024-07-24 19:28:48.918189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.758 qpair failed and we were unable to recover it.
00:28:02.758 [2024-07-24 19:28:48.918484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.758 [2024-07-24 19:28:48.918524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.758 qpair failed and we were unable to recover it.
00:28:02.758 [2024-07-24 19:28:48.918759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.758 [2024-07-24 19:28:48.918801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.759 qpair failed and we were unable to recover it.
00:28:02.759 [2024-07-24 19:28:48.919166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.759 [2024-07-24 19:28:48.919206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.759 qpair failed and we were unable to recover it.
00:28:02.759 [2024-07-24 19:28:48.919569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.759 [2024-07-24 19:28:48.919583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.759 qpair failed and we were unable to recover it.
00:28:02.759 [2024-07-24 19:28:48.919829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.759 [2024-07-24 19:28:48.919842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.759 qpair failed and we were unable to recover it.
00:28:02.759 [2024-07-24 19:28:48.920051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.759 [2024-07-24 19:28:48.920064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.759 qpair failed and we were unable to recover it.
00:28:02.759 [2024-07-24 19:28:48.920290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.759 [2024-07-24 19:28:48.920303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.759 qpair failed and we were unable to recover it.
00:28:02.759 [2024-07-24 19:28:48.920464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.759 [2024-07-24 19:28:48.920477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.759 qpair failed and we were unable to recover it.
00:28:02.759 [2024-07-24 19:28:48.920778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.759 [2024-07-24 19:28:48.920792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.759 qpair failed and we were unable to recover it.
00:28:02.759 [2024-07-24 19:28:48.921026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.759 [2024-07-24 19:28:48.921039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.759 qpair failed and we were unable to recover it.
00:28:02.759 [2024-07-24 19:28:48.921200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.759 [2024-07-24 19:28:48.921213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.759 qpair failed and we were unable to recover it.
00:28:02.759 [2024-07-24 19:28:48.921502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.759 [2024-07-24 19:28:48.921515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.759 qpair failed and we were unable to recover it.
00:28:02.759 [2024-07-24 19:28:48.921814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.759 [2024-07-24 19:28:48.921827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.759 qpair failed and we were unable to recover it.
00:28:02.759 [2024-07-24 19:28:48.921988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.759 [2024-07-24 19:28:48.922001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.759 qpair failed and we were unable to recover it.
00:28:02.759 [2024-07-24 19:28:48.922240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.759 [2024-07-24 19:28:48.922253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.759 qpair failed and we were unable to recover it.
00:28:02.759 [2024-07-24 19:28:48.922472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.759 [2024-07-24 19:28:48.922485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.759 qpair failed and we were unable to recover it.
00:28:02.759 [2024-07-24 19:28:48.922626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.759 [2024-07-24 19:28:48.922639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.759 qpair failed and we were unable to recover it.
00:28:02.759 [2024-07-24 19:28:48.922874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.759 [2024-07-24 19:28:48.922887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.759 qpair failed and we were unable to recover it.
00:28:02.759 [2024-07-24 19:28:48.923094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.759 [2024-07-24 19:28:48.923107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.759 qpair failed and we were unable to recover it.
00:28:02.759 [2024-07-24 19:28:48.923430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.759 [2024-07-24 19:28:48.923443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.759 qpair failed and we were unable to recover it.
00:28:02.759 [2024-07-24 19:28:48.923606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.759 [2024-07-24 19:28:48.923619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.759 qpair failed and we were unable to recover it.
00:28:02.759 [2024-07-24 19:28:48.923835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.759 [2024-07-24 19:28:48.923848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.759 qpair failed and we were unable to recover it.
00:28:02.759 [2024-07-24 19:28:48.924008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.759 [2024-07-24 19:28:48.924021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.759 qpair failed and we were unable to recover it.
00:28:02.759 [2024-07-24 19:28:48.924126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.759 [2024-07-24 19:28:48.924138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.759 qpair failed and we were unable to recover it.
00:28:02.759 [2024-07-24 19:28:48.924366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.759 [2024-07-24 19:28:48.924378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.759 qpair failed and we were unable to recover it.
00:28:02.759 [2024-07-24 19:28:48.924551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.759 [2024-07-24 19:28:48.924564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.759 qpair failed and we were unable to recover it.
00:28:02.759 [2024-07-24 19:28:48.924773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.759 [2024-07-24 19:28:48.924786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.759 qpair failed and we were unable to recover it.
00:28:02.759 [2024-07-24 19:28:48.925017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.759 [2024-07-24 19:28:48.925032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.759 qpair failed and we were unable to recover it.
00:28:02.759 [2024-07-24 19:28:48.925245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.759 [2024-07-24 19:28:48.925258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.759 qpair failed and we were unable to recover it.
00:28:02.759 [2024-07-24 19:28:48.925476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.759 [2024-07-24 19:28:48.925489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.759 qpair failed and we were unable to recover it.
00:28:02.759 [2024-07-24 19:28:48.925696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.759 [2024-07-24 19:28:48.925709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.759 qpair failed and we were unable to recover it.
00:28:02.759 [2024-07-24 19:28:48.925885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.759 [2024-07-24 19:28:48.925898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.759 qpair failed and we were unable to recover it.
00:28:02.759 [2024-07-24 19:28:48.926178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.759 [2024-07-24 19:28:48.926190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.759 qpair failed and we were unable to recover it.
00:28:02.759 [2024-07-24 19:28:48.926404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.759 [2024-07-24 19:28:48.926417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.759 qpair failed and we were unable to recover it.
00:28:02.759 [2024-07-24 19:28:48.926690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.759 [2024-07-24 19:28:48.926764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.759 qpair failed and we were unable to recover it.
00:28:02.759 [2024-07-24 19:28:48.927129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.759 [2024-07-24 19:28:48.927170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.759 qpair failed and we were unable to recover it.
00:28:02.759 [2024-07-24 19:28:48.927549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.759 [2024-07-24 19:28:48.927589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.759 qpair failed and we were unable to recover it.
00:28:02.760 [2024-07-24 19:28:48.927958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.760 [2024-07-24 19:28:48.927972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.760 qpair failed and we were unable to recover it.
00:28:02.760 [2024-07-24 19:28:48.928246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.760 [2024-07-24 19:28:48.928260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.760 qpair failed and we were unable to recover it.
00:28:02.760 [2024-07-24 19:28:48.928414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.760 [2024-07-24 19:28:48.928427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.760 qpair failed and we were unable to recover it.
00:28:02.760 [2024-07-24 19:28:48.928649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.760 [2024-07-24 19:28:48.928662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.760 qpair failed and we were unable to recover it.
00:28:02.760 [2024-07-24 19:28:48.928886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.760 [2024-07-24 19:28:48.928900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.760 qpair failed and we were unable to recover it.
00:28:02.760 [2024-07-24 19:28:48.929146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.760 [2024-07-24 19:28:48.929186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.760 qpair failed and we were unable to recover it.
00:28:02.760 [2024-07-24 19:28:48.929354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.760 [2024-07-24 19:28:48.929394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.760 qpair failed and we were unable to recover it.
00:28:02.760 [2024-07-24 19:28:48.929691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.760 [2024-07-24 19:28:48.929704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.760 qpair failed and we were unable to recover it.
00:28:02.760 [2024-07-24 19:28:48.929855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.760 [2024-07-24 19:28:48.929869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.760 qpair failed and we were unable to recover it.
00:28:02.760 [2024-07-24 19:28:48.930099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.760 [2024-07-24 19:28:48.930147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.760 qpair failed and we were unable to recover it.
00:28:02.760 [2024-07-24 19:28:48.930506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.760 [2024-07-24 19:28:48.930546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.760 qpair failed and we were unable to recover it.
00:28:02.760 [2024-07-24 19:28:48.930776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.760 [2024-07-24 19:28:48.930816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.760 qpair failed and we were unable to recover it.
00:28:02.760 [2024-07-24 19:28:48.931109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.760 [2024-07-24 19:28:48.931149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.760 qpair failed and we were unable to recover it.
00:28:02.760 [2024-07-24 19:28:48.931486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.760 [2024-07-24 19:28:48.931526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.760 qpair failed and we were unable to recover it.
00:28:02.760 [2024-07-24 19:28:48.931882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.760 [2024-07-24 19:28:48.931922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.760 qpair failed and we were unable to recover it.
00:28:02.760 [2024-07-24 19:28:48.932223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.760 [2024-07-24 19:28:48.932263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.760 qpair failed and we were unable to recover it.
00:28:02.760 [2024-07-24 19:28:48.932576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.760 [2024-07-24 19:28:48.932616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.760 qpair failed and we were unable to recover it.
00:28:02.760 [2024-07-24 19:28:48.932992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.760 [2024-07-24 19:28:48.933033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.760 qpair failed and we were unable to recover it.
00:28:02.760 [2024-07-24 19:28:48.933395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.760 [2024-07-24 19:28:48.933435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.760 qpair failed and we were unable to recover it.
00:28:02.760 [2024-07-24 19:28:48.933653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.760 [2024-07-24 19:28:48.933693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.760 qpair failed and we were unable to recover it.
00:28:02.760 [2024-07-24 19:28:48.934064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.760 [2024-07-24 19:28:48.934077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.760 qpair failed and we were unable to recover it.
00:28:02.760 [2024-07-24 19:28:48.934384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.760 [2024-07-24 19:28:48.934425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.760 qpair failed and we were unable to recover it.
00:28:02.760 [2024-07-24 19:28:48.934632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.760 [2024-07-24 19:28:48.934645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.760 qpair failed and we were unable to recover it.
00:28:02.760 [2024-07-24 19:28:48.934920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.760 [2024-07-24 19:28:48.934934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.760 qpair failed and we were unable to recover it.
00:28:02.760 [2024-07-24 19:28:48.935157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.760 [2024-07-24 19:28:48.935170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.760 qpair failed and we were unable to recover it.
00:28:02.760 [2024-07-24 19:28:48.935353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.760 [2024-07-24 19:28:48.935392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.760 qpair failed and we were unable to recover it.
00:28:02.760 [2024-07-24 19:28:48.935679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.760 [2024-07-24 19:28:48.935729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.760 qpair failed and we were unable to recover it.
00:28:02.760 [2024-07-24 19:28:48.936084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.760 [2024-07-24 19:28:48.936098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.760 qpair failed and we were unable to recover it.
00:28:02.760 [2024-07-24 19:28:48.936252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.760 [2024-07-24 19:28:48.936265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.760 qpair failed and we were unable to recover it.
00:28:02.760 [2024-07-24 19:28:48.936490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.760 [2024-07-24 19:28:48.936503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.760 qpair failed and we were unable to recover it.
00:28:02.760 [2024-07-24 19:28:48.936801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.760 [2024-07-24 19:28:48.936838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.760 qpair failed and we were unable to recover it.
00:28:02.760 [2024-07-24 19:28:48.937015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.760 [2024-07-24 19:28:48.937054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.760 qpair failed and we were unable to recover it.
00:28:02.760 [2024-07-24 19:28:48.937259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.760 [2024-07-24 19:28:48.937299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.760 qpair failed and we were unable to recover it.
00:28:02.760 [2024-07-24 19:28:48.937602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.760 [2024-07-24 19:28:48.937615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.760 qpair failed and we were unable to recover it.
00:28:02.760 [2024-07-24 19:28:48.937861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.760 [2024-07-24 19:28:48.937874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.760 qpair failed and we were unable to recover it.
00:28:02.760 [2024-07-24 19:28:48.938028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.761 [2024-07-24 19:28:48.938041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.761 qpair failed and we were unable to recover it.
00:28:02.761 [2024-07-24 19:28:48.938262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.761 [2024-07-24 19:28:48.938275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.761 qpair failed and we were unable to recover it.
00:28:02.761 [2024-07-24 19:28:48.938487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.761 [2024-07-24 19:28:48.938500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.761 qpair failed and we were unable to recover it.
00:28:02.761 [2024-07-24 19:28:48.938671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.761 [2024-07-24 19:28:48.938685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.761 qpair failed and we were unable to recover it.
00:28:02.761 [2024-07-24 19:28:48.938904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.761 [2024-07-24 19:28:48.938917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.761 qpair failed and we were unable to recover it.
00:28:02.761 [2024-07-24 19:28:48.939092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.761 [2024-07-24 19:28:48.939129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.761 qpair failed and we were unable to recover it.
00:28:02.761 [2024-07-24 19:28:48.939504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.761 [2024-07-24 19:28:48.939544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.761 qpair failed and we were unable to recover it.
00:28:02.761 [2024-07-24 19:28:48.939886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.761 [2024-07-24 19:28:48.939899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.761 qpair failed and we were unable to recover it.
00:28:02.761 [2024-07-24 19:28:48.940119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.761 [2024-07-24 19:28:48.940132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.761 qpair failed and we were unable to recover it.
00:28:02.761 [2024-07-24 19:28:48.940384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.761 [2024-07-24 19:28:48.940397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.761 qpair failed and we were unable to recover it.
00:28:02.761 [2024-07-24 19:28:48.940614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.761 [2024-07-24 19:28:48.940627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.761 qpair failed and we were unable to recover it.
00:28:02.761 [2024-07-24 19:28:48.940767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.761 [2024-07-24 19:28:48.940780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.761 qpair failed and we were unable to recover it.
00:28:02.761 [2024-07-24 19:28:48.941055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.761 [2024-07-24 19:28:48.941068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.761 qpair failed and we were unable to recover it.
00:28:02.761 [2024-07-24 19:28:48.941290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.761 [2024-07-24 19:28:48.941329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:02.761 qpair failed and we were unable to recover it.
00:28:02.761 [2024-07-24 19:28:48.941565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.761 [2024-07-24 19:28:48.941605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.761 qpair failed and we were unable to recover it. 00:28:02.761 [2024-07-24 19:28:48.941826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.761 [2024-07-24 19:28:48.941870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.761 qpair failed and we were unable to recover it. 00:28:02.761 [2024-07-24 19:28:48.942024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.761 [2024-07-24 19:28:48.942037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.761 qpair failed and we were unable to recover it. 00:28:02.761 [2024-07-24 19:28:48.942318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.761 [2024-07-24 19:28:48.942358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.761 qpair failed and we were unable to recover it. 00:28:02.761 [2024-07-24 19:28:48.942643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.761 [2024-07-24 19:28:48.942694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.761 qpair failed and we were unable to recover it. 00:28:02.761 [2024-07-24 19:28:48.943016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.761 [2024-07-24 19:28:48.943029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.761 qpair failed and we were unable to recover it. 00:28:02.761 [2024-07-24 19:28:48.943253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.761 [2024-07-24 19:28:48.943266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.761 qpair failed and we were unable to recover it. 00:28:02.761 [2024-07-24 19:28:48.943427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.761 [2024-07-24 19:28:48.943440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.761 qpair failed and we were unable to recover it. 00:28:02.761 [2024-07-24 19:28:48.943593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.761 [2024-07-24 19:28:48.943606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.761 qpair failed and we were unable to recover it. 00:28:02.761 [2024-07-24 19:28:48.943823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.761 [2024-07-24 19:28:48.943864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.761 qpair failed and we were unable to recover it. 
00:28:02.761 [2024-07-24 19:28:48.944145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.761 [2024-07-24 19:28:48.944185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.761 qpair failed and we were unable to recover it. 00:28:02.761 [2024-07-24 19:28:48.944412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.761 [2024-07-24 19:28:48.944451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.761 qpair failed and we were unable to recover it. 00:28:02.761 [2024-07-24 19:28:48.944747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.761 [2024-07-24 19:28:48.944787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.761 qpair failed and we were unable to recover it. 00:28:02.761 [2024-07-24 19:28:48.945146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.761 [2024-07-24 19:28:48.945186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.761 qpair failed and we were unable to recover it. 00:28:02.761 [2024-07-24 19:28:48.945471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.761 [2024-07-24 19:28:48.945512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.761 qpair failed and we were unable to recover it. 00:28:02.761 [2024-07-24 19:28:48.945753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.761 [2024-07-24 19:28:48.945766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.761 qpair failed and we were unable to recover it. 00:28:02.761 [2024-07-24 19:28:48.946073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.761 [2024-07-24 19:28:48.946113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.761 qpair failed and we were unable to recover it. 00:28:02.761 [2024-07-24 19:28:48.946330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.761 [2024-07-24 19:28:48.946370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.761 qpair failed and we were unable to recover it. 00:28:02.761 [2024-07-24 19:28:48.946728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.761 [2024-07-24 19:28:48.946768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.761 qpair failed and we were unable to recover it. 00:28:02.761 [2024-07-24 19:28:48.947087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.761 [2024-07-24 19:28:48.947127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.761 qpair failed and we were unable to recover it. 
00:28:02.761 [2024-07-24 19:28:48.947496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.761 [2024-07-24 19:28:48.947536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.761 qpair failed and we were unable to recover it. 00:28:02.761 [2024-07-24 19:28:48.947821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.762 [2024-07-24 19:28:48.947836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.762 qpair failed and we were unable to recover it. 00:28:02.762 [2024-07-24 19:28:48.948046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.762 [2024-07-24 19:28:48.948059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.762 qpair failed and we were unable to recover it. 00:28:02.762 [2024-07-24 19:28:48.948294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.762 [2024-07-24 19:28:48.948307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.762 qpair failed and we were unable to recover it. 00:28:02.762 [2024-07-24 19:28:48.948530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.762 [2024-07-24 19:28:48.948544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.762 qpair failed and we were unable to recover it. 00:28:02.762 [2024-07-24 19:28:48.948774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.762 [2024-07-24 19:28:48.948788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.762 qpair failed and we were unable to recover it. 00:28:02.762 [2024-07-24 19:28:48.949016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.762 [2024-07-24 19:28:48.949056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.762 qpair failed and we were unable to recover it. 00:28:02.762 [2024-07-24 19:28:48.949328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.762 [2024-07-24 19:28:48.949369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.762 qpair failed and we were unable to recover it. 00:28:02.762 [2024-07-24 19:28:48.949755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.762 [2024-07-24 19:28:48.949795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.762 qpair failed and we were unable to recover it. 00:28:02.762 [2024-07-24 19:28:48.950115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.762 [2024-07-24 19:28:48.950155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.762 qpair failed and we were unable to recover it. 
00:28:02.762 [2024-07-24 19:28:48.950451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.762 [2024-07-24 19:28:48.950502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.762 qpair failed and we were unable to recover it. 00:28:02.762 [2024-07-24 19:28:48.950663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.762 [2024-07-24 19:28:48.950676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.762 qpair failed and we were unable to recover it. 00:28:02.762 [2024-07-24 19:28:48.950963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.762 [2024-07-24 19:28:48.950977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.762 qpair failed and we were unable to recover it. 00:28:02.762 [2024-07-24 19:28:48.951280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.762 [2024-07-24 19:28:48.951320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.762 qpair failed and we were unable to recover it. 00:28:02.762 [2024-07-24 19:28:48.951605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.762 [2024-07-24 19:28:48.951645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.762 qpair failed and we were unable to recover it. 00:28:02.762 [2024-07-24 19:28:48.951937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.762 [2024-07-24 19:28:48.951951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.762 qpair failed and we were unable to recover it. 00:28:02.762 [2024-07-24 19:28:48.952202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.762 [2024-07-24 19:28:48.952243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.762 qpair failed and we were unable to recover it. 00:28:02.762 [2024-07-24 19:28:48.952600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.762 [2024-07-24 19:28:48.952640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.762 qpair failed and we were unable to recover it. 00:28:02.762 [2024-07-24 19:28:48.952867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.762 [2024-07-24 19:28:48.952908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.762 qpair failed and we were unable to recover it. 00:28:02.762 [2024-07-24 19:28:48.953132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.762 [2024-07-24 19:28:48.953172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.762 qpair failed and we were unable to recover it. 
00:28:02.762 [2024-07-24 19:28:48.953394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.762 [2024-07-24 19:28:48.953435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.762 qpair failed and we were unable to recover it. 00:28:02.762 [2024-07-24 19:28:48.953766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.762 [2024-07-24 19:28:48.953780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.762 qpair failed and we were unable to recover it. 00:28:02.762 [2024-07-24 19:28:48.954006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.762 [2024-07-24 19:28:48.954019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.762 qpair failed and we were unable to recover it. 00:28:02.762 [2024-07-24 19:28:48.954176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.762 [2024-07-24 19:28:48.954190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.762 qpair failed and we were unable to recover it. 00:28:02.762 [2024-07-24 19:28:48.954329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.762 [2024-07-24 19:28:48.954342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.762 qpair failed and we were unable to recover it. 00:28:02.762 [2024-07-24 19:28:48.954654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.762 [2024-07-24 19:28:48.954693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.762 qpair failed and we were unable to recover it. 00:28:02.762 [2024-07-24 19:28:48.955043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.762 [2024-07-24 19:28:48.955083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.762 qpair failed and we were unable to recover it. 00:28:02.762 [2024-07-24 19:28:48.955364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.762 [2024-07-24 19:28:48.955404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.762 qpair failed and we were unable to recover it. 00:28:02.762 [2024-07-24 19:28:48.955682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.762 [2024-07-24 19:28:48.955751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.762 qpair failed and we were unable to recover it. 00:28:02.762 [2024-07-24 19:28:48.955972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.762 [2024-07-24 19:28:48.956012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.762 qpair failed and we were unable to recover it. 
00:28:02.762 [2024-07-24 19:28:48.956302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.762 [2024-07-24 19:28:48.956342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.762 qpair failed and we were unable to recover it. 00:28:02.763 [2024-07-24 19:28:48.956584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.763 [2024-07-24 19:28:48.956625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.763 qpair failed and we were unable to recover it. 00:28:02.763 [2024-07-24 19:28:48.956983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.763 [2024-07-24 19:28:48.957024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.763 qpair failed and we were unable to recover it. 00:28:02.763 [2024-07-24 19:28:48.957366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.763 [2024-07-24 19:28:48.957406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.763 qpair failed and we were unable to recover it. 00:28:02.763 [2024-07-24 19:28:48.957797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.763 [2024-07-24 19:28:48.957838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.763 qpair failed and we were unable to recover it. 00:28:02.763 [2024-07-24 19:28:48.958196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.763 [2024-07-24 19:28:48.958237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.763 qpair failed and we were unable to recover it. 00:28:02.763 [2024-07-24 19:28:48.958619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.763 [2024-07-24 19:28:48.958659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.763 qpair failed and we were unable to recover it. 00:28:02.763 [2024-07-24 19:28:48.958920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.763 [2024-07-24 19:28:48.958934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.763 qpair failed and we were unable to recover it. 00:28:02.763 [2024-07-24 19:28:48.959147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.763 [2024-07-24 19:28:48.959187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.763 qpair failed and we were unable to recover it. 00:28:02.763 [2024-07-24 19:28:48.959474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.763 [2024-07-24 19:28:48.959515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.763 qpair failed and we were unable to recover it. 
00:28:02.763 [2024-07-24 19:28:48.959834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.763 [2024-07-24 19:28:48.959847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.763 qpair failed and we were unable to recover it. 00:28:02.763 [2024-07-24 19:28:48.960876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.763 [2024-07-24 19:28:48.960901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.763 qpair failed and we were unable to recover it. 00:28:02.763 [2024-07-24 19:28:48.961225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.763 [2024-07-24 19:28:48.961269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.763 qpair failed and we were unable to recover it. 00:28:02.763 [2024-07-24 19:28:48.961495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.763 [2024-07-24 19:28:48.961537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.763 qpair failed and we were unable to recover it. 00:28:02.763 [2024-07-24 19:28:48.961904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.763 [2024-07-24 19:28:48.961945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.763 qpair failed and we were unable to recover it. 00:28:02.763 [2024-07-24 19:28:48.962177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.763 [2024-07-24 19:28:48.962217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:02.763 qpair failed and we were unable to recover it. 00:28:03.036 [2024-07-24 19:28:48.962505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.036 [2024-07-24 19:28:48.962546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.036 qpair failed and we were unable to recover it. 00:28:03.036 [2024-07-24 19:28:48.962839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.036 [2024-07-24 19:28:48.962880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.036 qpair failed and we were unable to recover it. 00:28:03.036 [2024-07-24 19:28:48.963167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.036 [2024-07-24 19:28:48.963209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.036 qpair failed and we were unable to recover it. 00:28:03.036 [2024-07-24 19:28:48.963549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.036 [2024-07-24 19:28:48.963589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.036 qpair failed and we were unable to recover it. 
00:28:03.036 [2024-07-24 19:28:48.963888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.036 [2024-07-24 19:28:48.963929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.036 qpair failed and we were unable to recover it. 00:28:03.036 [2024-07-24 19:28:48.964229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.036 [2024-07-24 19:28:48.964270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.036 qpair failed and we were unable to recover it. 00:28:03.036 [2024-07-24 19:28:48.964565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.036 [2024-07-24 19:28:48.964606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.036 qpair failed and we were unable to recover it. 00:28:03.036 [2024-07-24 19:28:48.964880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.036 [2024-07-24 19:28:48.964894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.036 qpair failed and we were unable to recover it. 00:28:03.036 [2024-07-24 19:28:48.965007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.036 [2024-07-24 19:28:48.965020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.036 qpair failed and we were unable to recover it. 00:28:03.036 [2024-07-24 19:28:48.965270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.036 [2024-07-24 19:28:48.965285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.036 qpair failed and we were unable to recover it. 00:28:03.036 [2024-07-24 19:28:48.965448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.036 [2024-07-24 19:28:48.965463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.036 qpair failed and we were unable to recover it. 00:28:03.036 [2024-07-24 19:28:48.965621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.036 [2024-07-24 19:28:48.965662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.036 qpair failed and we were unable to recover it. 00:28:03.036 [2024-07-24 19:28:48.965908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.036 [2024-07-24 19:28:48.965950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.036 qpair failed and we were unable to recover it. 00:28:03.036 [2024-07-24 19:28:48.966311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.036 [2024-07-24 19:28:48.966351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.036 qpair failed and we were unable to recover it. 
00:28:03.036 [2024-07-24 19:28:48.966693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.036 [2024-07-24 19:28:48.966758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.036 qpair failed and we were unable to recover it. 00:28:03.036 [2024-07-24 19:28:48.967118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.036 [2024-07-24 19:28:48.967157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.036 qpair failed and we were unable to recover it. 00:28:03.036 [2024-07-24 19:28:48.967522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.037 [2024-07-24 19:28:48.967562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.037 qpair failed and we were unable to recover it. 00:28:03.037 [2024-07-24 19:28:48.967856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.037 [2024-07-24 19:28:48.967897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.037 qpair failed and we were unable to recover it. 00:28:03.037 [2024-07-24 19:28:48.968233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.037 [2024-07-24 19:28:48.968273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.037 qpair failed and we were unable to recover it. 00:28:03.037 [2024-07-24 19:28:48.968597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.037 [2024-07-24 19:28:48.968637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.037 qpair failed and we were unable to recover it. 00:28:03.037 [2024-07-24 19:28:48.968971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.037 [2024-07-24 19:28:48.968984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.037 qpair failed and we were unable to recover it. 00:28:03.037 [2024-07-24 19:28:48.969211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.037 [2024-07-24 19:28:48.969251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.037 qpair failed and we were unable to recover it. 00:28:03.037 [2024-07-24 19:28:48.969529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.037 [2024-07-24 19:28:48.969570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.037 qpair failed and we were unable to recover it. 00:28:03.037 [2024-07-24 19:28:48.969828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.037 [2024-07-24 19:28:48.969841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.037 qpair failed and we were unable to recover it. 
00:28:03.037 [2024-07-24 19:28:48.970003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.037 [2024-07-24 19:28:48.970016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.037 qpair failed and we were unable to recover it. 00:28:03.037 [2024-07-24 19:28:48.970185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.037 [2024-07-24 19:28:48.970225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.037 qpair failed and we were unable to recover it. 00:28:03.037 [2024-07-24 19:28:48.970469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.037 [2024-07-24 19:28:48.970508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.037 qpair failed and we were unable to recover it. 00:28:03.037 [2024-07-24 19:28:48.970803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.037 [2024-07-24 19:28:48.970844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.037 qpair failed and we were unable to recover it. 00:28:03.037 [2024-07-24 19:28:48.971046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.037 [2024-07-24 19:28:48.971086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.037 qpair failed and we were unable to recover it. 00:28:03.037 [2024-07-24 19:28:48.971360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.037 [2024-07-24 19:28:48.971400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.037 qpair failed and we were unable to recover it. 00:28:03.037 [2024-07-24 19:28:48.971690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.037 [2024-07-24 19:28:48.971743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.037 qpair failed and we were unable to recover it. 00:28:03.037 [2024-07-24 19:28:48.972014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.037 [2024-07-24 19:28:48.972026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.037 qpair failed and we were unable to recover it. 00:28:03.037 [2024-07-24 19:28:48.972318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.037 [2024-07-24 19:28:48.972358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.037 qpair failed and we were unable to recover it. 00:28:03.037 [2024-07-24 19:28:48.972645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.037 [2024-07-24 19:28:48.972686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.037 qpair failed and we were unable to recover it. 
00:28:03.037 [2024-07-24 19:28:48.972857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.037 [2024-07-24 19:28:48.972898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.037 qpair failed and we were unable to recover it. 00:28:03.037 [2024-07-24 19:28:48.973128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.037 [2024-07-24 19:28:48.973143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.037 qpair failed and we were unable to recover it. 00:28:03.037 [2024-07-24 19:28:48.973451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.037 [2024-07-24 19:28:48.973490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.037 qpair failed and we were unable to recover it. 00:28:03.037 [2024-07-24 19:28:48.973862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.037 [2024-07-24 19:28:48.973902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.037 qpair failed and we were unable to recover it. 00:28:03.037 [2024-07-24 19:28:48.974196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.037 [2024-07-24 19:28:48.974210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.037 qpair failed and we were unable to recover it. 00:28:03.037 [2024-07-24 19:28:48.974365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.037 [2024-07-24 19:28:48.974378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.037 qpair failed and we were unable to recover it. 00:28:03.037 [2024-07-24 19:28:48.974548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.037 [2024-07-24 19:28:48.974587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.037 qpair failed and we were unable to recover it. 00:28:03.037 [2024-07-24 19:28:48.974815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.037 [2024-07-24 19:28:48.974856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.037 qpair failed and we were unable to recover it. 00:28:03.037 [2024-07-24 19:28:48.975220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.037 [2024-07-24 19:28:48.975260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.037 qpair failed and we were unable to recover it. 00:28:03.037 [2024-07-24 19:28:48.975473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.037 [2024-07-24 19:28:48.975513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.037 qpair failed and we were unable to recover it. 
00:28:03.037 [2024-07-24 19:28:48.975876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.037 [2024-07-24 19:28:48.975916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.037 qpair failed and we were unable to recover it. 00:28:03.037 [2024-07-24 19:28:48.976134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.037 [2024-07-24 19:28:48.976175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.037 qpair failed and we were unable to recover it. 00:28:03.037 [2024-07-24 19:28:48.976536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.037 [2024-07-24 19:28:48.976576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.037 qpair failed and we were unable to recover it. 00:28:03.037 [2024-07-24 19:28:48.976928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.037 [2024-07-24 19:28:48.976969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.037 qpair failed and we were unable to recover it. 00:28:03.037 [2024-07-24 19:28:48.977321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.037 [2024-07-24 19:28:48.977362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.037 qpair failed and we were unable to recover it. 00:28:03.037 [2024-07-24 19:28:48.977710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.037 [2024-07-24 19:28:48.977761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.037 qpair failed and we were unable to recover it. 00:28:03.037 [2024-07-24 19:28:48.978098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.037 [2024-07-24 19:28:48.978111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.037 qpair failed and we were unable to recover it. 00:28:03.037 [2024-07-24 19:28:48.978362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.038 [2024-07-24 19:28:48.978412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.038 qpair failed and we were unable to recover it. 00:28:03.038 [2024-07-24 19:28:48.978619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.038 [2024-07-24 19:28:48.978659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.038 qpair failed and we were unable to recover it. 00:28:03.038 [2024-07-24 19:28:48.978976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.038 [2024-07-24 19:28:48.979017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.038 qpair failed and we were unable to recover it. 
00:28:03.038 [2024-07-24 19:28:48.979377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.038 [2024-07-24 19:28:48.979418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.038 qpair failed and we were unable to recover it. 00:28:03.038 [2024-07-24 19:28:48.979756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.038 [2024-07-24 19:28:48.979798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.038 qpair failed and we were unable to recover it. 00:28:03.038 [2024-07-24 19:28:48.980108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.038 [2024-07-24 19:28:48.980121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.038 qpair failed and we were unable to recover it. 00:28:03.038 [2024-07-24 19:28:48.980404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.038 [2024-07-24 19:28:48.980417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.038 qpair failed and we were unable to recover it. 00:28:03.038 [2024-07-24 19:28:48.980701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.038 [2024-07-24 19:28:48.980718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.038 qpair failed and we were unable to recover it. 00:28:03.038 [2024-07-24 19:28:48.981007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.038 [2024-07-24 19:28:48.981047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.038 qpair failed and we were unable to recover it. 00:28:03.038 [2024-07-24 19:28:48.981285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.038 [2024-07-24 19:28:48.981325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.038 qpair failed and we were unable to recover it. 00:28:03.038 [2024-07-24 19:28:48.981611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.038 [2024-07-24 19:28:48.981651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.038 qpair failed and we were unable to recover it. 00:28:03.038 [2024-07-24 19:28:48.982010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.038 [2024-07-24 19:28:48.982051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.038 qpair failed and we were unable to recover it. 00:28:03.038 [2024-07-24 19:28:48.982440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.038 [2024-07-24 19:28:48.982480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.038 qpair failed and we were unable to recover it. 
00:28:03.038 [2024-07-24 19:28:48.982819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.038 [2024-07-24 19:28:48.982832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.038 qpair failed and we were unable to recover it. 00:28:03.038 [2024-07-24 19:28:48.983069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.038 [2024-07-24 19:28:48.983109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.038 qpair failed and we were unable to recover it. 00:28:03.038 [2024-07-24 19:28:48.983392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.038 [2024-07-24 19:28:48.983431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.038 qpair failed and we were unable to recover it. 00:28:03.038 [2024-07-24 19:28:48.983730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.038 [2024-07-24 19:28:48.983771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.038 qpair failed and we were unable to recover it. 00:28:03.038 [2024-07-24 19:28:48.984054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.038 [2024-07-24 19:28:48.984094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.038 qpair failed and we were unable to recover it. 00:28:03.038 [2024-07-24 19:28:48.984385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.038 [2024-07-24 19:28:48.984425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.038 qpair failed and we were unable to recover it. 00:28:03.038 [2024-07-24 19:28:48.984787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.038 [2024-07-24 19:28:48.984828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.038 qpair failed and we were unable to recover it. 00:28:03.038 [2024-07-24 19:28:48.985047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.038 [2024-07-24 19:28:48.985086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.038 qpair failed and we were unable to recover it. 00:28:03.038 [2024-07-24 19:28:48.985355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.038 [2024-07-24 19:28:48.985394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.038 qpair failed and we were unable to recover it. 00:28:03.038 [2024-07-24 19:28:48.985736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.038 [2024-07-24 19:28:48.985777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.038 qpair failed and we were unable to recover it. 
00:28:03.038 [2024-07-24 19:28:48.986104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.038 [2024-07-24 19:28:48.986144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.038 qpair failed and we were unable to recover it. 00:28:03.038 [2024-07-24 19:28:48.986474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.038 [2024-07-24 19:28:48.986520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.038 qpair failed and we were unable to recover it. 00:28:03.038 [2024-07-24 19:28:48.986755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.038 [2024-07-24 19:28:48.986796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.038 qpair failed and we were unable to recover it. 00:28:03.038 [2024-07-24 19:28:48.987151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.038 [2024-07-24 19:28:48.987191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.038 qpair failed and we were unable to recover it. 00:28:03.038 [2024-07-24 19:28:48.987427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.038 [2024-07-24 19:28:48.987467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.038 qpair failed and we were unable to recover it. 00:28:03.038 [2024-07-24 19:28:48.987695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.038 [2024-07-24 19:28:48.987708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.038 qpair failed and we were unable to recover it. 00:28:03.038 [2024-07-24 19:28:48.987902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.038 [2024-07-24 19:28:48.987942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.038 qpair failed and we were unable to recover it. 00:28:03.038 [2024-07-24 19:28:48.988304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.038 [2024-07-24 19:28:48.988344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.038 qpair failed and we were unable to recover it. 00:28:03.038 [2024-07-24 19:28:48.988622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.038 [2024-07-24 19:28:48.988663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.038 qpair failed and we were unable to recover it. 00:28:03.038 [2024-07-24 19:28:48.988900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.038 [2024-07-24 19:28:48.988940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.038 qpair failed and we were unable to recover it. 
00:28:03.038 [2024-07-24 19:28:48.989165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.038 [2024-07-24 19:28:48.989205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.038 qpair failed and we were unable to recover it.
00:28:03.045 [... the three-message sequence above repeats 210 times, with in-message timestamps running from 19:28:48.989165 through 19:28:49.056083 (about 67 ms of back-to-back retries); every attempt fails with errno = 111 against the same tqpair=0x7fd54c000b90, addr=10.0.0.2, port=4420, and every attempt ends with "qpair failed and we were unable to recover it." ...]
00:28:03.045 [2024-07-24 19:28:49.056439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.045 [2024-07-24 19:28:49.056474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.045 qpair failed and we were unable to recover it. 00:28:03.045 [2024-07-24 19:28:49.056836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.045 [2024-07-24 19:28:49.056878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.045 qpair failed and we were unable to recover it. 00:28:03.045 [2024-07-24 19:28:49.057254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.045 [2024-07-24 19:28:49.057293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.045 qpair failed and we were unable to recover it. 00:28:03.045 [2024-07-24 19:28:49.057574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.045 [2024-07-24 19:28:49.057613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.045 qpair failed and we were unable to recover it. 00:28:03.045 [2024-07-24 19:28:49.057793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.045 [2024-07-24 19:28:49.057807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.045 qpair failed and we were unable to recover it. 00:28:03.045 [2024-07-24 19:28:49.057968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.045 [2024-07-24 19:28:49.058007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.045 qpair failed and we were unable to recover it. 00:28:03.045 [2024-07-24 19:28:49.058385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.045 [2024-07-24 19:28:49.058425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.045 qpair failed and we were unable to recover it. 00:28:03.045 [2024-07-24 19:28:49.058695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.045 [2024-07-24 19:28:49.058742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.045 qpair failed and we were unable to recover it. 00:28:03.045 [2024-07-24 19:28:49.059048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.045 [2024-07-24 19:28:49.059094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.045 qpair failed and we were unable to recover it. 00:28:03.045 [2024-07-24 19:28:49.059379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.045 [2024-07-24 19:28:49.059419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.045 qpair failed and we were unable to recover it. 
00:28:03.045 [2024-07-24 19:28:49.059808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.045 [2024-07-24 19:28:49.059848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.045 qpair failed and we were unable to recover it. 00:28:03.045 [2024-07-24 19:28:49.060210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.045 [2024-07-24 19:28:49.060250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.045 qpair failed and we were unable to recover it. 00:28:03.045 [2024-07-24 19:28:49.060593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.045 [2024-07-24 19:28:49.060633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.045 qpair failed and we were unable to recover it. 00:28:03.045 [2024-07-24 19:28:49.060923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.045 [2024-07-24 19:28:49.060964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.045 qpair failed and we were unable to recover it. 00:28:03.045 [2024-07-24 19:28:49.061184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.045 [2024-07-24 19:28:49.061197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.045 qpair failed and we were unable to recover it. 00:28:03.045 [2024-07-24 19:28:49.061429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.046 [2024-07-24 19:28:49.061468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.046 qpair failed and we were unable to recover it. 00:28:03.046 [2024-07-24 19:28:49.061832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.046 [2024-07-24 19:28:49.061873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.046 qpair failed and we were unable to recover it. 00:28:03.046 [2024-07-24 19:28:49.062114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.046 [2024-07-24 19:28:49.062128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.046 qpair failed and we were unable to recover it. 00:28:03.046 [2024-07-24 19:28:49.062323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.046 [2024-07-24 19:28:49.062336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.046 qpair failed and we were unable to recover it. 00:28:03.046 [2024-07-24 19:28:49.062567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.046 [2024-07-24 19:28:49.062607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.046 qpair failed and we were unable to recover it. 
00:28:03.046 [2024-07-24 19:28:49.062977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.046 [2024-07-24 19:28:49.063016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.046 qpair failed and we were unable to recover it. 00:28:03.046 [2024-07-24 19:28:49.063286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.046 [2024-07-24 19:28:49.063325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.046 qpair failed and we were unable to recover it. 00:28:03.046 [2024-07-24 19:28:49.063619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.046 [2024-07-24 19:28:49.063659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.046 qpair failed and we were unable to recover it. 00:28:03.046 [2024-07-24 19:28:49.064028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.046 [2024-07-24 19:28:49.064070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.046 qpair failed and we were unable to recover it. 00:28:03.046 [2024-07-24 19:28:49.064446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.046 [2024-07-24 19:28:49.064486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.046 qpair failed and we were unable to recover it. 00:28:03.046 [2024-07-24 19:28:49.064773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.046 [2024-07-24 19:28:49.064813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.046 qpair failed and we were unable to recover it. 00:28:03.046 [2024-07-24 19:28:49.065024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.046 [2024-07-24 19:28:49.065064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.046 qpair failed and we were unable to recover it. 00:28:03.046 [2024-07-24 19:28:49.065354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.046 [2024-07-24 19:28:49.065384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.046 qpair failed and we were unable to recover it. 00:28:03.046 [2024-07-24 19:28:49.065774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.046 [2024-07-24 19:28:49.065815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.046 qpair failed and we were unable to recover it. 00:28:03.046 [2024-07-24 19:28:49.066190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.046 [2024-07-24 19:28:49.066230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.046 qpair failed and we were unable to recover it. 
00:28:03.046 [2024-07-24 19:28:49.066576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.046 [2024-07-24 19:28:49.066616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.046 qpair failed and we were unable to recover it. 00:28:03.046 [2024-07-24 19:28:49.066950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.046 [2024-07-24 19:28:49.066963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.046 qpair failed and we were unable to recover it. 00:28:03.046 [2024-07-24 19:28:49.067256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.046 [2024-07-24 19:28:49.067296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.046 qpair failed and we were unable to recover it. 00:28:03.046 [2024-07-24 19:28:49.067522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.046 [2024-07-24 19:28:49.067562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.046 qpair failed and we were unable to recover it. 00:28:03.046 [2024-07-24 19:28:49.067791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.046 [2024-07-24 19:28:49.067831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.046 qpair failed and we were unable to recover it. 00:28:03.046 [2024-07-24 19:28:49.068086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.046 [2024-07-24 19:28:49.068099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.046 qpair failed and we were unable to recover it. 00:28:03.046 [2024-07-24 19:28:49.068316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.046 [2024-07-24 19:28:49.068329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.046 qpair failed and we were unable to recover it. 00:28:03.046 [2024-07-24 19:28:49.068628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.046 [2024-07-24 19:28:49.068641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.046 qpair failed and we were unable to recover it. 00:28:03.046 [2024-07-24 19:28:49.068937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.046 [2024-07-24 19:28:49.068950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.046 qpair failed and we were unable to recover it. 00:28:03.046 [2024-07-24 19:28:49.069183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.046 [2024-07-24 19:28:49.069223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.046 qpair failed and we were unable to recover it. 
00:28:03.046 [2024-07-24 19:28:49.069526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.046 [2024-07-24 19:28:49.069567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.047 qpair failed and we were unable to recover it. 00:28:03.047 [2024-07-24 19:28:49.069782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.047 [2024-07-24 19:28:49.069823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.047 qpair failed and we were unable to recover it. 00:28:03.047 [2024-07-24 19:28:49.070164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.047 [2024-07-24 19:28:49.070203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.047 qpair failed and we were unable to recover it. 00:28:03.047 [2024-07-24 19:28:49.070492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.047 [2024-07-24 19:28:49.070532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.047 qpair failed and we were unable to recover it. 00:28:03.047 [2024-07-24 19:28:49.070807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.047 [2024-07-24 19:28:49.070848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.047 qpair failed and we were unable to recover it. 00:28:03.047 [2024-07-24 19:28:49.071059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.047 [2024-07-24 19:28:49.071099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.047 qpair failed and we were unable to recover it. 00:28:03.047 [2024-07-24 19:28:49.071408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.047 [2024-07-24 19:28:49.071447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.047 qpair failed and we were unable to recover it. 00:28:03.047 [2024-07-24 19:28:49.071735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.047 [2024-07-24 19:28:49.071776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.047 qpair failed and we were unable to recover it. 00:28:03.047 [2024-07-24 19:28:49.072098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.047 [2024-07-24 19:28:49.072144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.047 qpair failed and we were unable to recover it. 00:28:03.047 [2024-07-24 19:28:49.072363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.047 [2024-07-24 19:28:49.072402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.047 qpair failed and we were unable to recover it. 
00:28:03.047 [2024-07-24 19:28:49.072695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.047 [2024-07-24 19:28:49.072762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.047 qpair failed and we were unable to recover it. 00:28:03.047 [2024-07-24 19:28:49.073052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.047 [2024-07-24 19:28:49.073092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.047 qpair failed and we were unable to recover it. 00:28:03.047 [2024-07-24 19:28:49.073452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.047 [2024-07-24 19:28:49.073492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.047 qpair failed and we were unable to recover it. 00:28:03.047 [2024-07-24 19:28:49.073800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.047 [2024-07-24 19:28:49.073841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.047 qpair failed and we were unable to recover it. 00:28:03.047 [2024-07-24 19:28:49.074180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.047 [2024-07-24 19:28:49.074220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.047 qpair failed and we were unable to recover it. 00:28:03.047 [2024-07-24 19:28:49.074444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.047 [2024-07-24 19:28:49.074484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.047 qpair failed and we were unable to recover it. 00:28:03.047 [2024-07-24 19:28:49.074769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.047 [2024-07-24 19:28:49.074810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.047 qpair failed and we were unable to recover it. 00:28:03.047 [2024-07-24 19:28:49.075173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.047 [2024-07-24 19:28:49.075213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.047 qpair failed and we were unable to recover it. 00:28:03.047 [2024-07-24 19:28:49.075520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.047 [2024-07-24 19:28:49.075559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.047 qpair failed and we were unable to recover it. 00:28:03.047 [2024-07-24 19:28:49.075924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.047 [2024-07-24 19:28:49.075965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.048 qpair failed and we were unable to recover it. 
00:28:03.048 [2024-07-24 19:28:49.076333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.048 [2024-07-24 19:28:49.076373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.048 qpair failed and we were unable to recover it. 00:28:03.048 [2024-07-24 19:28:49.076687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.048 [2024-07-24 19:28:49.076736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.048 qpair failed and we were unable to recover it. 00:28:03.048 [2024-07-24 19:28:49.076961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.048 [2024-07-24 19:28:49.077001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.048 qpair failed and we were unable to recover it. 00:28:03.048 [2024-07-24 19:28:49.077367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.048 [2024-07-24 19:28:49.077406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.048 qpair failed and we were unable to recover it. 00:28:03.048 [2024-07-24 19:28:49.077690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.048 [2024-07-24 19:28:49.077752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.048 qpair failed and we were unable to recover it. 00:28:03.048 [2024-07-24 19:28:49.078007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.048 [2024-07-24 19:28:49.078020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.048 qpair failed and we were unable to recover it. 00:28:03.048 [2024-07-24 19:28:49.078295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.048 [2024-07-24 19:28:49.078307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.048 qpair failed and we were unable to recover it. 00:28:03.048 [2024-07-24 19:28:49.078558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.048 [2024-07-24 19:28:49.078571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.048 qpair failed and we were unable to recover it. 00:28:03.048 [2024-07-24 19:28:49.078794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.048 [2024-07-24 19:28:49.078808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.048 qpair failed and we were unable to recover it. 00:28:03.048 [2024-07-24 19:28:49.078968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.048 [2024-07-24 19:28:49.078981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.048 qpair failed and we were unable to recover it. 
00:28:03.048 [2024-07-24 19:28:49.079286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.048 [2024-07-24 19:28:49.079326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.048 qpair failed and we were unable to recover it. 00:28:03.048 [2024-07-24 19:28:49.079614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.048 [2024-07-24 19:28:49.079654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.048 qpair failed and we were unable to recover it. 00:28:03.048 [2024-07-24 19:28:49.079817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.048 [2024-07-24 19:28:49.079857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.048 qpair failed and we were unable to recover it. 00:28:03.048 [2024-07-24 19:28:49.080137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.048 [2024-07-24 19:28:49.080176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.048 qpair failed and we were unable to recover it. 00:28:03.048 [2024-07-24 19:28:49.080427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.048 [2024-07-24 19:28:49.080440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.048 qpair failed and we were unable to recover it. 00:28:03.048 [2024-07-24 19:28:49.080758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.048 [2024-07-24 19:28:49.080798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.048 qpair failed and we were unable to recover it. 00:28:03.048 [2024-07-24 19:28:49.080975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.048 [2024-07-24 19:28:49.081015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.048 qpair failed and we were unable to recover it. 00:28:03.048 [2024-07-24 19:28:49.081228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.048 [2024-07-24 19:28:49.081268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.048 qpair failed and we were unable to recover it. 00:28:03.048 [2024-07-24 19:28:49.081550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.048 [2024-07-24 19:28:49.081590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.048 qpair failed and we were unable to recover it. 00:28:03.048 [2024-07-24 19:28:49.081929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.048 [2024-07-24 19:28:49.081970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.048 qpair failed and we were unable to recover it. 
00:28:03.048 [2024-07-24 19:28:49.082335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.048 [2024-07-24 19:28:49.082375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.048 qpair failed and we were unable to recover it. 00:28:03.048 [2024-07-24 19:28:49.082671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.048 [2024-07-24 19:28:49.082712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.048 qpair failed and we were unable to recover it. 00:28:03.048 [2024-07-24 19:28:49.083031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.048 [2024-07-24 19:28:49.083072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.048 qpair failed and we were unable to recover it. 00:28:03.048 [2024-07-24 19:28:49.083422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.048 [2024-07-24 19:28:49.083462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.048 qpair failed and we were unable to recover it. 00:28:03.048 [2024-07-24 19:28:49.083783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.048 [2024-07-24 19:28:49.083824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.048 qpair failed and we were unable to recover it. 00:28:03.048 [2024-07-24 19:28:49.084113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.049 [2024-07-24 19:28:49.084154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.049 qpair failed and we were unable to recover it. 00:28:03.049 [2024-07-24 19:28:49.084491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.049 [2024-07-24 19:28:49.084530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.049 qpair failed and we were unable to recover it. 00:28:03.049 [2024-07-24 19:28:49.084890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.049 [2024-07-24 19:28:49.084930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.049 qpair failed and we were unable to recover it. 00:28:03.049 [2024-07-24 19:28:49.085217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.049 [2024-07-24 19:28:49.085262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.049 qpair failed and we were unable to recover it. 00:28:03.049 [2024-07-24 19:28:49.085550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.049 [2024-07-24 19:28:49.085590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.049 qpair failed and we were unable to recover it. 
00:28:03.049 [2024-07-24 19:28:49.085927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.049 [2024-07-24 19:28:49.085968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.049 qpair failed and we were unable to recover it. 00:28:03.049 [2024-07-24 19:28:49.086294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.049 [2024-07-24 19:28:49.086306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.049 qpair failed and we were unable to recover it. 00:28:03.049 [2024-07-24 19:28:49.086533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.049 [2024-07-24 19:28:49.086546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.049 qpair failed and we were unable to recover it. 00:28:03.049 [2024-07-24 19:28:49.086790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.049 [2024-07-24 19:28:49.086803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.049 qpair failed and we were unable to recover it. 00:28:03.049 [2024-07-24 19:28:49.087125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.049 [2024-07-24 19:28:49.087164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.049 qpair failed and we were unable to recover it. 00:28:03.049 [2024-07-24 19:28:49.087456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.049 [2024-07-24 19:28:49.087497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.049 qpair failed and we were unable to recover it. 00:28:03.049 [2024-07-24 19:28:49.087732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.049 [2024-07-24 19:28:49.087773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.049 qpair failed and we were unable to recover it. 00:28:03.049 [2024-07-24 19:28:49.088136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.049 [2024-07-24 19:28:49.088175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.049 qpair failed and we were unable to recover it. 00:28:03.049 [2024-07-24 19:28:49.088464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.049 [2024-07-24 19:28:49.088505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.049 qpair failed and we were unable to recover it. 00:28:03.049 [2024-07-24 19:28:49.088890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.049 [2024-07-24 19:28:49.088931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.049 qpair failed and we were unable to recover it. 
00:28:03.049 [2024-07-24 19:28:49.089270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.049 [2024-07-24 19:28:49.089310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.049 qpair failed and we were unable to recover it. 00:28:03.049 [2024-07-24 19:28:49.089675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.049 [2024-07-24 19:28:49.089726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.049 qpair failed and we were unable to recover it. 00:28:03.049 [2024-07-24 19:28:49.089921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.049 [2024-07-24 19:28:49.089934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.049 qpair failed and we were unable to recover it. 00:28:03.049 [2024-07-24 19:28:49.090214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.049 [2024-07-24 19:28:49.090253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.049 qpair failed and we were unable to recover it. 00:28:03.049 [2024-07-24 19:28:49.090612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.049 [2024-07-24 19:28:49.090653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.049 qpair failed and we were unable to recover it. 00:28:03.049 [2024-07-24 19:28:49.091054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.049 [2024-07-24 19:28:49.091094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.049 qpair failed and we were unable to recover it. 00:28:03.049 [2024-07-24 19:28:49.091377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.049 [2024-07-24 19:28:49.091390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.049 qpair failed and we were unable to recover it. 00:28:03.049 [2024-07-24 19:28:49.091684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.049 [2024-07-24 19:28:49.091697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.049 qpair failed and we were unable to recover it. 00:28:03.049 [2024-07-24 19:28:49.091942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.049 [2024-07-24 19:28:49.091984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.049 qpair failed and we were unable to recover it. 00:28:03.049 [2024-07-24 19:28:49.092270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.049 [2024-07-24 19:28:49.092310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.049 qpair failed and we were unable to recover it. 
00:28:03.049 [2024-07-24 19:28:49.092605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.049 [2024-07-24 19:28:49.092645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.049 qpair failed and we were unable to recover it. 00:28:03.049 [2024-07-24 19:28:49.092949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.050 [2024-07-24 19:28:49.092990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.050 qpair failed and we were unable to recover it. 00:28:03.050 [2024-07-24 19:28:49.093318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.050 [2024-07-24 19:28:49.093342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.050 qpair failed and we were unable to recover it. 00:28:03.050 [2024-07-24 19:28:49.093646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.050 [2024-07-24 19:28:49.093686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.050 qpair failed and we were unable to recover it. 00:28:03.050 [2024-07-24 19:28:49.094028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.050 [2024-07-24 19:28:49.094069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.050 qpair failed and we were unable to recover it. 00:28:03.050 [2024-07-24 19:28:49.094286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.050 [2024-07-24 19:28:49.094299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.050 qpair failed and we were unable to recover it. 00:28:03.050 [2024-07-24 19:28:49.094605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.050 [2024-07-24 19:28:49.094645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.050 qpair failed and we were unable to recover it. 00:28:03.050 [2024-07-24 19:28:49.094996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.050 [2024-07-24 19:28:49.095037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.050 qpair failed and we were unable to recover it. 00:28:03.050 [2024-07-24 19:28:49.095250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.050 [2024-07-24 19:28:49.095290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.050 qpair failed and we were unable to recover it. 00:28:03.050 [2024-07-24 19:28:49.095561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.050 [2024-07-24 19:28:49.095601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.050 qpair failed and we were unable to recover it. 
00:28:03.050 [2024-07-24 19:28:49.095766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.050 [2024-07-24 19:28:49.095807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.050 qpair failed and we were unable to recover it. 00:28:03.050 [2024-07-24 19:28:49.096079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.050 [2024-07-24 19:28:49.096118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.050 qpair failed and we were unable to recover it. 00:28:03.050 [2024-07-24 19:28:49.096417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.050 [2024-07-24 19:28:49.096457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.050 qpair failed and we were unable to recover it. 00:28:03.050 [2024-07-24 19:28:49.096739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.050 [2024-07-24 19:28:49.096780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.050 qpair failed and we were unable to recover it. 00:28:03.050 [2024-07-24 19:28:49.097046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.050 [2024-07-24 19:28:49.097059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.050 qpair failed and we were unable to recover it. 00:28:03.050 [2024-07-24 19:28:49.097364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.050 [2024-07-24 19:28:49.097404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.050 qpair failed and we were unable to recover it. 00:28:03.050 [2024-07-24 19:28:49.097703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.050 [2024-07-24 19:28:49.097764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.050 qpair failed and we were unable to recover it. 00:28:03.050 [2024-07-24 19:28:49.097972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.050 [2024-07-24 19:28:49.097985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.050 qpair failed and we were unable to recover it. 00:28:03.050 [2024-07-24 19:28:49.098143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.050 [2024-07-24 19:28:49.098157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.050 qpair failed and we were unable to recover it. 00:28:03.050 [2024-07-24 19:28:49.098451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.050 [2024-07-24 19:28:49.098491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.050 qpair failed and we were unable to recover it. 
00:28:03.050 [2024-07-24 19:28:49.098767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.050 [2024-07-24 19:28:49.098808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.050 qpair failed and we were unable to recover it.
00:28:03.050 [... the connect()/qpair-failed pair above repeats for tqpair=0x7fd54c000b90 from 19:28:49.098968 through 19:28:49.133671; every attempt fails with errno = 111 and ends with "qpair failed and we were unable to recover it." ...]
00:28:03.054 [2024-07-24 19:28:49.133896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf2210 is same with the state(5) to be set
00:28:03.054 [... the same error pair then repeats for tqpair=0x1ce41a0 (19:28:49.134323 through 19:28:49.135784), tqpair=0x7fd554000b90 (19:28:49.146323 through 19:28:49.150481), and tqpair=0x7fd54c000b90 again, through 19:28:49.162068, always against addr=10.0.0.2, port=4420 with errno = 111 ...]
00:28:03.056 [2024-07-24 19:28:49.162368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.056 [2024-07-24 19:28:49.162408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.056 qpair failed and we were unable to recover it. 00:28:03.056 [2024-07-24 19:28:49.162767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.056 [2024-07-24 19:28:49.162807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.056 qpair failed and we were unable to recover it. 00:28:03.056 [2024-07-24 19:28:49.163090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.056 [2024-07-24 19:28:49.163137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.056 qpair failed and we were unable to recover it. 00:28:03.056 [2024-07-24 19:28:49.163461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.056 [2024-07-24 19:28:49.163475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.056 qpair failed and we were unable to recover it. 00:28:03.056 [2024-07-24 19:28:49.163666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.056 [2024-07-24 19:28:49.163679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.056 qpair failed and we were unable to recover it. 00:28:03.056 [2024-07-24 19:28:49.163904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.056 [2024-07-24 19:28:49.163917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.056 qpair failed and we were unable to recover it. 00:28:03.056 [2024-07-24 19:28:49.164061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.056 [2024-07-24 19:28:49.164074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.056 qpair failed and we were unable to recover it. 00:28:03.056 [2024-07-24 19:28:49.164308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.056 [2024-07-24 19:28:49.164322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.056 qpair failed and we were unable to recover it. 00:28:03.056 [2024-07-24 19:28:49.164532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.056 [2024-07-24 19:28:49.164545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.056 qpair failed and we were unable to recover it. 00:28:03.056 [2024-07-24 19:28:49.164765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.056 [2024-07-24 19:28:49.164779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.056 qpair failed and we were unable to recover it. 
00:28:03.057 [2024-07-24 19:28:49.165012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.057 [2024-07-24 19:28:49.165025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.057 qpair failed and we were unable to recover it. 00:28:03.057 [2024-07-24 19:28:49.165317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.057 [2024-07-24 19:28:49.165330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.057 qpair failed and we were unable to recover it. 00:28:03.057 [2024-07-24 19:28:49.165540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.057 [2024-07-24 19:28:49.165553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.057 qpair failed and we were unable to recover it. 00:28:03.057 [2024-07-24 19:28:49.165830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.057 [2024-07-24 19:28:49.165844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.057 qpair failed and we were unable to recover it. 00:28:03.057 [2024-07-24 19:28:49.166121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.057 [2024-07-24 19:28:49.166134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.057 qpair failed and we were unable to recover it. 00:28:03.057 [2024-07-24 19:28:49.166423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.057 [2024-07-24 19:28:49.166436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.057 qpair failed and we were unable to recover it. 00:28:03.057 [2024-07-24 19:28:49.166605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.057 [2024-07-24 19:28:49.166618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.057 qpair failed and we were unable to recover it. 00:28:03.057 [2024-07-24 19:28:49.166833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.057 [2024-07-24 19:28:49.166846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.057 qpair failed and we were unable to recover it. 00:28:03.057 [2024-07-24 19:28:49.167027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.057 [2024-07-24 19:28:49.167040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.057 qpair failed and we were unable to recover it. 00:28:03.057 [2024-07-24 19:28:49.167188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.057 [2024-07-24 19:28:49.167201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.057 qpair failed and we were unable to recover it. 
00:28:03.057 [2024-07-24 19:28:49.167498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.057 [2024-07-24 19:28:49.167511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.057 qpair failed and we were unable to recover it. 00:28:03.057 [2024-07-24 19:28:49.167786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.057 [2024-07-24 19:28:49.167799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.057 qpair failed and we were unable to recover it. 00:28:03.057 [2024-07-24 19:28:49.167942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.057 [2024-07-24 19:28:49.167955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.057 qpair failed and we were unable to recover it. 00:28:03.057 [2024-07-24 19:28:49.168205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.057 [2024-07-24 19:28:49.168245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.057 qpair failed and we were unable to recover it. 00:28:03.057 [2024-07-24 19:28:49.168602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.057 [2024-07-24 19:28:49.168642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.057 qpair failed and we were unable to recover it. 00:28:03.057 [2024-07-24 19:28:49.169020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.057 [2024-07-24 19:28:49.169062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.057 qpair failed and we were unable to recover it. 00:28:03.057 [2024-07-24 19:28:49.169346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.057 [2024-07-24 19:28:49.169386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.057 qpair failed and we were unable to recover it. 00:28:03.057 [2024-07-24 19:28:49.169666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.057 [2024-07-24 19:28:49.169706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.057 qpair failed and we were unable to recover it. 00:28:03.057 [2024-07-24 19:28:49.170087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.057 [2024-07-24 19:28:49.170100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.057 qpair failed and we were unable to recover it. 00:28:03.057 [2024-07-24 19:28:49.170409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.057 [2024-07-24 19:28:49.170449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.057 qpair failed and we were unable to recover it. 
00:28:03.057 [2024-07-24 19:28:49.170730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.057 [2024-07-24 19:28:49.170771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.057 qpair failed and we were unable to recover it. 00:28:03.057 [2024-07-24 19:28:49.171085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.057 [2024-07-24 19:28:49.171098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.057 qpair failed and we were unable to recover it. 00:28:03.057 [2024-07-24 19:28:49.171399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.057 [2024-07-24 19:28:49.171412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.057 qpair failed and we were unable to recover it. 00:28:03.057 [2024-07-24 19:28:49.171720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.057 [2024-07-24 19:28:49.171733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.057 qpair failed and we were unable to recover it. 00:28:03.057 [2024-07-24 19:28:49.172034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.057 [2024-07-24 19:28:49.172047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.057 qpair failed and we were unable to recover it. 00:28:03.057 [2024-07-24 19:28:49.172334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.057 [2024-07-24 19:28:49.172373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.057 qpair failed and we were unable to recover it. 00:28:03.057 [2024-07-24 19:28:49.172711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.057 [2024-07-24 19:28:49.172760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.057 qpair failed and we were unable to recover it. 00:28:03.057 [2024-07-24 19:28:49.173115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.057 [2024-07-24 19:28:49.173129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.057 qpair failed and we were unable to recover it. 00:28:03.057 [2024-07-24 19:28:49.173416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.057 [2024-07-24 19:28:49.173457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.057 qpair failed and we were unable to recover it. 00:28:03.057 [2024-07-24 19:28:49.173696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.057 [2024-07-24 19:28:49.173759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.057 qpair failed and we were unable to recover it. 
00:28:03.057 [2024-07-24 19:28:49.174099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.057 [2024-07-24 19:28:49.174139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.057 qpair failed and we were unable to recover it. 00:28:03.057 [2024-07-24 19:28:49.174500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.057 [2024-07-24 19:28:49.174541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.058 qpair failed and we were unable to recover it. 00:28:03.058 [2024-07-24 19:28:49.174812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.058 [2024-07-24 19:28:49.174858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.058 qpair failed and we were unable to recover it. 00:28:03.058 [2024-07-24 19:28:49.175134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.058 [2024-07-24 19:28:49.175175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.058 qpair failed and we were unable to recover it. 00:28:03.058 [2024-07-24 19:28:49.175466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.058 [2024-07-24 19:28:49.175506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.058 qpair failed and we were unable to recover it. 00:28:03.058 [2024-07-24 19:28:49.175859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.058 [2024-07-24 19:28:49.175900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.058 qpair failed and we were unable to recover it. 00:28:03.058 [2024-07-24 19:28:49.176186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.058 [2024-07-24 19:28:49.176226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.058 qpair failed and we were unable to recover it. 00:28:03.058 [2024-07-24 19:28:49.176536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.058 [2024-07-24 19:28:49.176576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.058 qpair failed and we were unable to recover it. 00:28:03.058 [2024-07-24 19:28:49.176889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.058 [2024-07-24 19:28:49.176929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.058 qpair failed and we were unable to recover it. 00:28:03.058 [2024-07-24 19:28:49.177162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.058 [2024-07-24 19:28:49.177174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.058 qpair failed and we were unable to recover it. 
00:28:03.058 [2024-07-24 19:28:49.177450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.058 [2024-07-24 19:28:49.177489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.058 qpair failed and we were unable to recover it. 00:28:03.058 [2024-07-24 19:28:49.177769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.058 [2024-07-24 19:28:49.177809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.058 qpair failed and we were unable to recover it. 00:28:03.058 [2024-07-24 19:28:49.178173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.058 [2024-07-24 19:28:49.178213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.058 qpair failed and we were unable to recover it. 00:28:03.058 [2024-07-24 19:28:49.178502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.058 [2024-07-24 19:28:49.178542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.058 qpair failed and we were unable to recover it. 00:28:03.058 [2024-07-24 19:28:49.178904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.058 [2024-07-24 19:28:49.178946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.058 qpair failed and we were unable to recover it. 00:28:03.058 [2024-07-24 19:28:49.179263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.058 [2024-07-24 19:28:49.179302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.058 qpair failed and we were unable to recover it. 00:28:03.058 [2024-07-24 19:28:49.179670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.058 [2024-07-24 19:28:49.179711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.058 qpair failed and we were unable to recover it. 00:28:03.058 [2024-07-24 19:28:49.179993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.058 [2024-07-24 19:28:49.180033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.058 qpair failed and we were unable to recover it. 00:28:03.058 [2024-07-24 19:28:49.180252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.058 [2024-07-24 19:28:49.180291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.058 qpair failed and we were unable to recover it. 00:28:03.058 [2024-07-24 19:28:49.180583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.058 [2024-07-24 19:28:49.180622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.058 qpair failed and we were unable to recover it. 
00:28:03.058 [2024-07-24 19:28:49.180905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.058 [2024-07-24 19:28:49.180947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.058 qpair failed and we were unable to recover it. 00:28:03.058 [2024-07-24 19:28:49.181277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.058 [2024-07-24 19:28:49.181311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.058 qpair failed and we were unable to recover it. 00:28:03.058 [2024-07-24 19:28:49.181604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.058 [2024-07-24 19:28:49.181645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.058 qpair failed and we were unable to recover it. 00:28:03.058 [2024-07-24 19:28:49.182020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.058 [2024-07-24 19:28:49.182061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.058 qpair failed and we were unable to recover it. 00:28:03.058 [2024-07-24 19:28:49.182347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.058 [2024-07-24 19:28:49.182387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.058 qpair failed and we were unable to recover it. 00:28:03.058 [2024-07-24 19:28:49.182677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.058 [2024-07-24 19:28:49.182728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.058 qpair failed and we were unable to recover it. 00:28:03.058 [2024-07-24 19:28:49.183016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.058 [2024-07-24 19:28:49.183056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.058 qpair failed and we were unable to recover it. 00:28:03.058 [2024-07-24 19:28:49.183385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.058 [2024-07-24 19:28:49.183426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.058 qpair failed and we were unable to recover it. 00:28:03.058 [2024-07-24 19:28:49.183790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.058 [2024-07-24 19:28:49.183831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.058 qpair failed and we were unable to recover it. 00:28:03.058 [2024-07-24 19:28:49.184204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.058 [2024-07-24 19:28:49.184216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.058 qpair failed and we were unable to recover it. 
00:28:03.058 [2024-07-24 19:28:49.184495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.058 [2024-07-24 19:28:49.184507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.058 qpair failed and we were unable to recover it. 00:28:03.058 [2024-07-24 19:28:49.184808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.058 [2024-07-24 19:28:49.184850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.058 qpair failed and we were unable to recover it. 00:28:03.058 [2024-07-24 19:28:49.185081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.058 [2024-07-24 19:28:49.185121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.058 qpair failed and we were unable to recover it. 00:28:03.058 [2024-07-24 19:28:49.185508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.058 [2024-07-24 19:28:49.185548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.058 qpair failed and we were unable to recover it. 00:28:03.058 [2024-07-24 19:28:49.185831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.059 [2024-07-24 19:28:49.185874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.059 qpair failed and we were unable to recover it. 00:28:03.059 [2024-07-24 19:28:49.186216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.059 [2024-07-24 19:28:49.186255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.059 qpair failed and we were unable to recover it. 00:28:03.059 [2024-07-24 19:28:49.186639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.059 [2024-07-24 19:28:49.186678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.059 qpair failed and we were unable to recover it. 00:28:03.059 [2024-07-24 19:28:49.187026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.059 [2024-07-24 19:28:49.187066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.059 qpair failed and we were unable to recover it. 00:28:03.059 [2024-07-24 19:28:49.187400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.059 [2024-07-24 19:28:49.187438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.059 qpair failed and we were unable to recover it. 00:28:03.059 [2024-07-24 19:28:49.187795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.059 [2024-07-24 19:28:49.187837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.059 qpair failed and we were unable to recover it. 
00:28:03.059 [2024-07-24 19:28:49.188189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.059 [2024-07-24 19:28:49.188229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.059 qpair failed and we were unable to recover it. 00:28:03.059 [2024-07-24 19:28:49.188576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.059 [2024-07-24 19:28:49.188616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.059 qpair failed and we were unable to recover it. 00:28:03.059 [2024-07-24 19:28:49.188896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.059 [2024-07-24 19:28:49.188944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.059 qpair failed and we were unable to recover it. 00:28:03.059 [2024-07-24 19:28:49.189269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.059 [2024-07-24 19:28:49.189310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.059 qpair failed and we were unable to recover it. 00:28:03.059 [2024-07-24 19:28:49.189668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.059 [2024-07-24 19:28:49.189709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.059 qpair failed and we were unable to recover it. 00:28:03.059 [2024-07-24 19:28:49.190077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.059 [2024-07-24 19:28:49.190117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.059 qpair failed and we were unable to recover it. 00:28:03.059 [2024-07-24 19:28:49.190400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.059 [2024-07-24 19:28:49.190412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.059 qpair failed and we were unable to recover it. 00:28:03.059 [2024-07-24 19:28:49.190577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.059 [2024-07-24 19:28:49.190590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.059 qpair failed and we were unable to recover it. 00:28:03.059 [2024-07-24 19:28:49.190753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.059 [2024-07-24 19:28:49.190766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.059 qpair failed and we were unable to recover it. 00:28:03.059 [2024-07-24 19:28:49.190988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.059 [2024-07-24 19:28:49.191028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.059 qpair failed and we were unable to recover it. 
00:28:03.059 [2024-07-24 19:28:49.191397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.059 [2024-07-24 19:28:49.191438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.059 qpair failed and we were unable to recover it. 00:28:03.059 [2024-07-24 19:28:49.191696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.059 [2024-07-24 19:28:49.191738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.059 qpair failed and we were unable to recover it. 00:28:03.059 [2024-07-24 19:28:49.192037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.059 [2024-07-24 19:28:49.192077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.059 qpair failed and we were unable to recover it. 00:28:03.059 [2024-07-24 19:28:49.192400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.059 [2024-07-24 19:28:49.192440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.059 qpair failed and we were unable to recover it. 00:28:03.059 [2024-07-24 19:28:49.192816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.059 [2024-07-24 19:28:49.192858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.059 qpair failed and we were unable to recover it. 00:28:03.059 [2024-07-24 19:28:49.193156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.059 [2024-07-24 19:28:49.193196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.059 qpair failed and we were unable to recover it. 00:28:03.059 [2024-07-24 19:28:49.193548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.059 [2024-07-24 19:28:49.193588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.059 qpair failed and we were unable to recover it. 00:28:03.059 [2024-07-24 19:28:49.193900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.059 [2024-07-24 19:28:49.193942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.059 qpair failed and we were unable to recover it. 00:28:03.059 [2024-07-24 19:28:49.194226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.059 [2024-07-24 19:28:49.194263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.059 qpair failed and we were unable to recover it. 00:28:03.059 [2024-07-24 19:28:49.194606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.059 [2024-07-24 19:28:49.194645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.059 qpair failed and we were unable to recover it. 
00:28:03.059 [2024-07-24 19:28:49.194940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.059 [2024-07-24 19:28:49.194981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.059 qpair failed and we were unable to recover it. 00:28:03.059 [2024-07-24 19:28:49.195317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.059 [2024-07-24 19:28:49.195357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.059 qpair failed and we were unable to recover it. 00:28:03.059 [2024-07-24 19:28:49.195658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.059 [2024-07-24 19:28:49.195670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.059 qpair failed and we were unable to recover it. 00:28:03.059 [2024-07-24 19:28:49.195879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.059 [2024-07-24 19:28:49.195891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.059 qpair failed and we were unable to recover it. 00:28:03.059 [2024-07-24 19:28:49.196199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.059 [2024-07-24 19:28:49.196240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.059 qpair failed and we were unable to recover it. 00:28:03.059 [2024-07-24 19:28:49.196602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.059 [2024-07-24 19:28:49.196642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.059 qpair failed and we were unable to recover it. 00:28:03.059 [2024-07-24 19:28:49.197017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.059 [2024-07-24 19:28:49.197058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.059 qpair failed and we were unable to recover it. 00:28:03.059 [2024-07-24 19:28:49.197329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.059 [2024-07-24 19:28:49.197341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.059 qpair failed and we were unable to recover it. 00:28:03.059 [2024-07-24 19:28:49.197660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.059 [2024-07-24 19:28:49.197700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.059 qpair failed and we were unable to recover it. 00:28:03.059 [2024-07-24 19:28:49.198078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.059 [2024-07-24 19:28:49.198119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.060 qpair failed and we were unable to recover it. 
00:28:03.060 [2024-07-24 19:28:49.198442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.060 [2024-07-24 19:28:49.198455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.060 qpair failed and we were unable to recover it. 00:28:03.060 [2024-07-24 19:28:49.198706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.060 [2024-07-24 19:28:49.198750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.060 qpair failed and we were unable to recover it. 00:28:03.060 [2024-07-24 19:28:49.199134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.060 [2024-07-24 19:28:49.199174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.060 qpair failed and we were unable to recover it. 00:28:03.060 [2024-07-24 19:28:49.199531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.060 [2024-07-24 19:28:49.199578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.060 qpair failed and we were unable to recover it. 00:28:03.060 [2024-07-24 19:28:49.199864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.060 [2024-07-24 19:28:49.199906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.060 qpair failed and we were unable to recover it. 00:28:03.060 [2024-07-24 19:28:49.200182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.060 [2024-07-24 19:28:49.200223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.060 qpair failed and we were unable to recover it. 00:28:03.060 [2024-07-24 19:28:49.200567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.060 [2024-07-24 19:28:49.200607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.060 qpair failed and we were unable to recover it. 00:28:03.060 [2024-07-24 19:28:49.200885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.060 [2024-07-24 19:28:49.200925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.060 qpair failed and we were unable to recover it. 00:28:03.060 [2024-07-24 19:28:49.201270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.060 [2024-07-24 19:28:49.201310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.060 qpair failed and we were unable to recover it. 00:28:03.060 [2024-07-24 19:28:49.201669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.060 [2024-07-24 19:28:49.201682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.060 qpair failed and we were unable to recover it. 
00:28:03.060 [2024-07-24 19:28:49.201910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.060 [2024-07-24 19:28:49.201923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.060 qpair failed and we were unable to recover it. 00:28:03.060 [2024-07-24 19:28:49.202232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.060 [2024-07-24 19:28:49.202271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.060 qpair failed and we were unable to recover it. 00:28:03.060 [2024-07-24 19:28:49.202558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.060 [2024-07-24 19:28:49.202604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.060 qpair failed and we were unable to recover it. 00:28:03.060 [2024-07-24 19:28:49.202974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.060 [2024-07-24 19:28:49.203015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.060 qpair failed and we were unable to recover it. 00:28:03.060 [2024-07-24 19:28:49.203357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.060 [2024-07-24 19:28:49.203397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.060 qpair failed and we were unable to recover it. 00:28:03.060 [2024-07-24 19:28:49.203776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.060 [2024-07-24 19:28:49.203789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.060 qpair failed and we were unable to recover it. 00:28:03.060 [2024-07-24 19:28:49.204097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.060 [2024-07-24 19:28:49.204138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.060 qpair failed and we were unable to recover it. 00:28:03.060 [2024-07-24 19:28:49.204487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.060 [2024-07-24 19:28:49.204527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.060 qpair failed and we were unable to recover it. 00:28:03.060 [2024-07-24 19:28:49.204818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.060 [2024-07-24 19:28:49.204860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.060 qpair failed and we were unable to recover it. 00:28:03.060 [2024-07-24 19:28:49.205154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.060 [2024-07-24 19:28:49.205194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.060 qpair failed and we were unable to recover it. 
00:28:03.060 [2024-07-24 19:28:49.205474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:28:03.060 [2024-07-24 19:28:49.205512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 
00:28:03.060 qpair failed and we were unable to recover it. 
00:28:03.338 (the same connect() failed / sock connection error / qpair failed sequence repeated 209 more times for tqpair=0x7fd54c000b90, addr=10.0.0.2, port=4420, through [2024-07-24 19:28:49.280837]) 
00:28:03.338 [2024-07-24 19:28:49.281082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.338 [2024-07-24 19:28:49.281124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.338 qpair failed and we were unable to recover it. 00:28:03.338 [2024-07-24 19:28:49.281417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.338 [2024-07-24 19:28:49.281457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.338 qpair failed and we were unable to recover it. 00:28:03.338 [2024-07-24 19:28:49.281833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.338 [2024-07-24 19:28:49.281847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.338 qpair failed and we were unable to recover it. 00:28:03.338 [2024-07-24 19:28:49.282159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.338 [2024-07-24 19:28:49.282200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.338 qpair failed and we were unable to recover it. 00:28:03.338 [2024-07-24 19:28:49.282574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.338 [2024-07-24 19:28:49.282614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.338 qpair failed and we were unable to recover it. 00:28:03.338 [2024-07-24 19:28:49.282950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.338 [2024-07-24 19:28:49.282986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.338 qpair failed and we were unable to recover it. 00:28:03.338 [2024-07-24 19:28:49.283354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.338 [2024-07-24 19:28:49.283394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.338 qpair failed and we were unable to recover it. 00:28:03.338 [2024-07-24 19:28:49.283696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.338 [2024-07-24 19:28:49.283744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.338 qpair failed and we were unable to recover it. 00:28:03.338 [2024-07-24 19:28:49.284116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.338 [2024-07-24 19:28:49.284157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.338 qpair failed and we were unable to recover it. 00:28:03.338 [2024-07-24 19:28:49.284522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.338 [2024-07-24 19:28:49.284567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.338 qpair failed and we were unable to recover it. 
00:28:03.338 [2024-07-24 19:28:49.284855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.338 [2024-07-24 19:28:49.284868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.338 qpair failed and we were unable to recover it. 00:28:03.338 [2024-07-24 19:28:49.285173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.338 [2024-07-24 19:28:49.285209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.338 qpair failed and we were unable to recover it. 00:28:03.338 [2024-07-24 19:28:49.285498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.339 [2024-07-24 19:28:49.285538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.339 qpair failed and we were unable to recover it. 00:28:03.339 [2024-07-24 19:28:49.285845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.339 [2024-07-24 19:28:49.285886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.339 qpair failed and we were unable to recover it. 00:28:03.339 [2024-07-24 19:28:49.286301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.339 [2024-07-24 19:28:49.286343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.339 qpair failed and we were unable to recover it. 00:28:03.339 [2024-07-24 19:28:49.286710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.339 [2024-07-24 19:28:49.286759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.339 qpair failed and we were unable to recover it. 00:28:03.339 [2024-07-24 19:28:49.287073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.339 [2024-07-24 19:28:49.287114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.339 qpair failed and we were unable to recover it. 00:28:03.339 [2024-07-24 19:28:49.287485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.339 [2024-07-24 19:28:49.287525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.339 qpair failed and we were unable to recover it. 00:28:03.339 [2024-07-24 19:28:49.287893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.339 [2024-07-24 19:28:49.287935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.339 qpair failed and we were unable to recover it. 00:28:03.339 [2024-07-24 19:28:49.288210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.339 [2024-07-24 19:28:49.288250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.339 qpair failed and we were unable to recover it. 
00:28:03.339 [2024-07-24 19:28:49.288600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.339 [2024-07-24 19:28:49.288640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.339 qpair failed and we were unable to recover it. 00:28:03.339 [2024-07-24 19:28:49.289034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.339 [2024-07-24 19:28:49.289076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.339 qpair failed and we were unable to recover it. 00:28:03.339 [2024-07-24 19:28:49.289388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.339 [2024-07-24 19:28:49.289428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.339 qpair failed and we were unable to recover it. 00:28:03.339 [2024-07-24 19:28:49.289801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.339 [2024-07-24 19:28:49.289842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.339 qpair failed and we were unable to recover it. 00:28:03.339 [2024-07-24 19:28:49.290142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.339 [2024-07-24 19:28:49.290182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.339 qpair failed and we were unable to recover it. 00:28:03.339 [2024-07-24 19:28:49.290515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.339 [2024-07-24 19:28:49.290555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.339 qpair failed and we were unable to recover it. 00:28:03.339 [2024-07-24 19:28:49.290847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.339 [2024-07-24 19:28:49.290888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.339 qpair failed and we were unable to recover it. 00:28:03.339 [2024-07-24 19:28:49.291254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.339 [2024-07-24 19:28:49.291294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.339 qpair failed and we were unable to recover it. 00:28:03.339 [2024-07-24 19:28:49.291640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.339 [2024-07-24 19:28:49.291680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.339 qpair failed and we were unable to recover it. 00:28:03.339 [2024-07-24 19:28:49.292070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.339 [2024-07-24 19:28:49.292111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.339 qpair failed and we were unable to recover it. 
00:28:03.339 [2024-07-24 19:28:49.292479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.339 [2024-07-24 19:28:49.292520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.339 qpair failed and we were unable to recover it. 00:28:03.339 [2024-07-24 19:28:49.292865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.339 [2024-07-24 19:28:49.292905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.339 qpair failed and we were unable to recover it. 00:28:03.339 [2024-07-24 19:28:49.293290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.339 [2024-07-24 19:28:49.293331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.339 qpair failed and we were unable to recover it. 00:28:03.339 [2024-07-24 19:28:49.293692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.339 [2024-07-24 19:28:49.293706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.339 qpair failed and we were unable to recover it. 00:28:03.339 [2024-07-24 19:28:49.294028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.339 [2024-07-24 19:28:49.294070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.339 qpair failed and we were unable to recover it. 00:28:03.339 [2024-07-24 19:28:49.294416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.339 [2024-07-24 19:28:49.294456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.339 qpair failed and we were unable to recover it. 00:28:03.339 [2024-07-24 19:28:49.294843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.339 [2024-07-24 19:28:49.294885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.339 qpair failed and we were unable to recover it. 00:28:03.339 [2024-07-24 19:28:49.295230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.339 [2024-07-24 19:28:49.295270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.339 qpair failed and we were unable to recover it. 00:28:03.339 [2024-07-24 19:28:49.295579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.339 [2024-07-24 19:28:49.295592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.339 qpair failed and we were unable to recover it. 00:28:03.339 [2024-07-24 19:28:49.295907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.339 [2024-07-24 19:28:49.295948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.339 qpair failed and we were unable to recover it. 
00:28:03.339 [2024-07-24 19:28:49.296277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.339 [2024-07-24 19:28:49.296318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.339 qpair failed and we were unable to recover it. 00:28:03.339 [2024-07-24 19:28:49.296677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.339 [2024-07-24 19:28:49.296741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.339 qpair failed and we were unable to recover it. 00:28:03.339 [2024-07-24 19:28:49.297039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.339 [2024-07-24 19:28:49.297080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.339 qpair failed and we were unable to recover it. 00:28:03.339 [2024-07-24 19:28:49.297397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.339 [2024-07-24 19:28:49.297437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.339 qpair failed and we were unable to recover it. 00:28:03.339 [2024-07-24 19:28:49.297805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.339 [2024-07-24 19:28:49.297847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.339 qpair failed and we were unable to recover it. 00:28:03.339 [2024-07-24 19:28:49.298137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.339 [2024-07-24 19:28:49.298177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.339 qpair failed and we were unable to recover it. 00:28:03.339 [2024-07-24 19:28:49.298546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.339 [2024-07-24 19:28:49.298586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.339 qpair failed and we were unable to recover it. 00:28:03.339 [2024-07-24 19:28:49.298866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.340 [2024-07-24 19:28:49.298880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.340 qpair failed and we were unable to recover it. 00:28:03.340 [2024-07-24 19:28:49.299115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.340 [2024-07-24 19:28:49.299129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.340 qpair failed and we were unable to recover it. 00:28:03.340 [2024-07-24 19:28:49.299354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.340 [2024-07-24 19:28:49.299399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.340 qpair failed and we were unable to recover it. 
00:28:03.340 [2024-07-24 19:28:49.299697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.340 [2024-07-24 19:28:49.299710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.340 qpair failed and we were unable to recover it. 00:28:03.340 [2024-07-24 19:28:49.299952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.340 [2024-07-24 19:28:49.299966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.340 qpair failed and we were unable to recover it. 00:28:03.340 [2024-07-24 19:28:49.300266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.340 [2024-07-24 19:28:49.300279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.340 qpair failed and we were unable to recover it. 00:28:03.340 [2024-07-24 19:28:49.300599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.340 [2024-07-24 19:28:49.300638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.340 qpair failed and we were unable to recover it. 00:28:03.340 [2024-07-24 19:28:49.301032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.340 [2024-07-24 19:28:49.301074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.340 qpair failed and we were unable to recover it. 00:28:03.340 [2024-07-24 19:28:49.301445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.340 [2024-07-24 19:28:49.301486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.340 qpair failed and we were unable to recover it. 00:28:03.340 [2024-07-24 19:28:49.301859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.340 [2024-07-24 19:28:49.301902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.340 qpair failed and we were unable to recover it. 00:28:03.340 [2024-07-24 19:28:49.302139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.340 [2024-07-24 19:28:49.302179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.340 qpair failed and we were unable to recover it. 00:28:03.340 [2024-07-24 19:28:49.302559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.340 [2024-07-24 19:28:49.302600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.340 qpair failed and we were unable to recover it. 00:28:03.340 [2024-07-24 19:28:49.302969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.340 [2024-07-24 19:28:49.303022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.340 qpair failed and we were unable to recover it. 
00:28:03.340 [2024-07-24 19:28:49.303406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.340 [2024-07-24 19:28:49.303445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.340 qpair failed and we were unable to recover it. 00:28:03.340 [2024-07-24 19:28:49.303674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.340 [2024-07-24 19:28:49.303725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.340 qpair failed and we were unable to recover it. 00:28:03.340 [2024-07-24 19:28:49.304026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.340 [2024-07-24 19:28:49.304067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.340 qpair failed and we were unable to recover it. 00:28:03.340 [2024-07-24 19:28:49.304445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.340 [2024-07-24 19:28:49.304485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.340 qpair failed and we were unable to recover it. 00:28:03.340 [2024-07-24 19:28:49.304856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.340 [2024-07-24 19:28:49.304870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.340 qpair failed and we were unable to recover it. 00:28:03.340 [2024-07-24 19:28:49.305186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.340 [2024-07-24 19:28:49.305228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.340 qpair failed and we were unable to recover it. 00:28:03.340 [2024-07-24 19:28:49.305598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.340 [2024-07-24 19:28:49.305637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.340 qpair failed and we were unable to recover it. 00:28:03.340 [2024-07-24 19:28:49.306017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.340 [2024-07-24 19:28:49.306058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.340 qpair failed and we were unable to recover it. 00:28:03.340 [2024-07-24 19:28:49.306385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.340 [2024-07-24 19:28:49.306425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.340 qpair failed and we were unable to recover it. 00:28:03.340 [2024-07-24 19:28:49.306801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.340 [2024-07-24 19:28:49.306843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.340 qpair failed and we were unable to recover it. 
00:28:03.340 [2024-07-24 19:28:49.307239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.340 [2024-07-24 19:28:49.307279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.340 qpair failed and we were unable to recover it. 00:28:03.340 [2024-07-24 19:28:49.307649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.340 [2024-07-24 19:28:49.307703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.340 qpair failed and we were unable to recover it. 00:28:03.340 [2024-07-24 19:28:49.308039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.340 [2024-07-24 19:28:49.308081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.340 qpair failed and we were unable to recover it. 00:28:03.340 [2024-07-24 19:28:49.308395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.340 [2024-07-24 19:28:49.308436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.340 qpair failed and we were unable to recover it. 00:28:03.340 [2024-07-24 19:28:49.308809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.340 [2024-07-24 19:28:49.308851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.340 qpair failed and we were unable to recover it. 00:28:03.340 [2024-07-24 19:28:49.309226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.340 [2024-07-24 19:28:49.309267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.340 qpair failed and we were unable to recover it. 00:28:03.340 [2024-07-24 19:28:49.309561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.340 [2024-07-24 19:28:49.309574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.340 qpair failed and we were unable to recover it. 00:28:03.340 [2024-07-24 19:28:49.309892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.340 [2024-07-24 19:28:49.309934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.340 qpair failed and we were unable to recover it. 00:28:03.340 [2024-07-24 19:28:49.310234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.340 [2024-07-24 19:28:49.310276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.340 qpair failed and we were unable to recover it. 00:28:03.340 [2024-07-24 19:28:49.310656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.340 [2024-07-24 19:28:49.310696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.340 qpair failed and we were unable to recover it. 
00:28:03.340 [2024-07-24 19:28:49.310989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.340 [2024-07-24 19:28:49.311030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.340 qpair failed and we were unable to recover it. 00:28:03.340 [2024-07-24 19:28:49.311399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.340 [2024-07-24 19:28:49.311439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.340 qpair failed and we were unable to recover it. 00:28:03.340 [2024-07-24 19:28:49.311811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.341 [2024-07-24 19:28:49.311852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.341 qpair failed and we were unable to recover it. 00:28:03.341 [2024-07-24 19:28:49.312205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.341 [2024-07-24 19:28:49.312245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.341 qpair failed and we were unable to recover it. 00:28:03.341 [2024-07-24 19:28:49.312631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.341 [2024-07-24 19:28:49.312672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.341 qpair failed and we were unable to recover it. 00:28:03.341 [2024-07-24 19:28:49.312938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.341 [2024-07-24 19:28:49.312952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.341 qpair failed and we were unable to recover it. 00:28:03.341 [2024-07-24 19:28:49.313205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.341 [2024-07-24 19:28:49.313251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.341 qpair failed and we were unable to recover it. 00:28:03.341 [2024-07-24 19:28:49.313489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.341 [2024-07-24 19:28:49.313529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.341 qpair failed and we were unable to recover it. 00:28:03.341 [2024-07-24 19:28:49.313900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.341 [2024-07-24 19:28:49.313942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.341 qpair failed and we were unable to recover it. 00:28:03.341 [2024-07-24 19:28:49.314314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.341 [2024-07-24 19:28:49.314359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.341 qpair failed and we were unable to recover it. 
00:28:03.341 [2024-07-24 19:28:49.314738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.341 [2024-07-24 19:28:49.314780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.341 qpair failed and we were unable to recover it. 00:28:03.341 [2024-07-24 19:28:49.315037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.341 [2024-07-24 19:28:49.315051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.341 qpair failed and we were unable to recover it. 00:28:03.341 [2024-07-24 19:28:49.315366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.341 [2024-07-24 19:28:49.315406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.341 qpair failed and we were unable to recover it. 00:28:03.341 [2024-07-24 19:28:49.315784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.341 [2024-07-24 19:28:49.315825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.341 qpair failed and we were unable to recover it. 00:28:03.341 [2024-07-24 19:28:49.316176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.341 [2024-07-24 19:28:49.316217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.341 qpair failed and we were unable to recover it. 00:28:03.341 [2024-07-24 19:28:49.316519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.341 [2024-07-24 19:28:49.316532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.341 qpair failed and we were unable to recover it. 00:28:03.341 [2024-07-24 19:28:49.316871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.341 [2024-07-24 19:28:49.316913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.341 qpair failed and we were unable to recover it. 00:28:03.341 [2024-07-24 19:28:49.317292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.341 [2024-07-24 19:28:49.317332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.341 qpair failed and we were unable to recover it. 00:28:03.341 [2024-07-24 19:28:49.317710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.341 [2024-07-24 19:28:49.317770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.341 qpair failed and we were unable to recover it. 00:28:03.341 [2024-07-24 19:28:49.318145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.341 [2024-07-24 19:28:49.318186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.341 qpair failed and we were unable to recover it. 
00:28:03.341 [2024-07-24 19:28:49.318557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.341 [2024-07-24 19:28:49.318597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.341 qpair failed and we were unable to recover it. 00:28:03.341 [2024-07-24 19:28:49.318920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.341 [2024-07-24 19:28:49.318962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.341 qpair failed and we were unable to recover it. 00:28:03.341 [2024-07-24 19:28:49.319337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.341 [2024-07-24 19:28:49.319379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.341 qpair failed and we were unable to recover it. 00:28:03.341 [2024-07-24 19:28:49.319682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.341 [2024-07-24 19:28:49.319733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.341 qpair failed and we were unable to recover it. 00:28:03.341 [2024-07-24 19:28:49.320085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.341 [2024-07-24 19:28:49.320126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.341 qpair failed and we were unable to recover it. 00:28:03.341 [2024-07-24 19:28:49.320496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.341 [2024-07-24 19:28:49.320537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.341 qpair failed and we were unable to recover it. 00:28:03.341 [2024-07-24 19:28:49.320873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.341 [2024-07-24 19:28:49.320908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.341 qpair failed and we were unable to recover it. 00:28:03.341 [2024-07-24 19:28:49.321284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.341 [2024-07-24 19:28:49.321325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.341 qpair failed and we were unable to recover it. 00:28:03.341 [2024-07-24 19:28:49.321701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.341 [2024-07-24 19:28:49.321753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.341 qpair failed and we were unable to recover it. 00:28:03.341 [2024-07-24 19:28:49.322070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.341 [2024-07-24 19:28:49.322111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.341 qpair failed and we were unable to recover it. 
00:28:03.341 [2024-07-24 19:28:49.322413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.341 [2024-07-24 19:28:49.322453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.341 qpair failed and we were unable to recover it. 00:28:03.341 [2024-07-24 19:28:49.322807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.341 [2024-07-24 19:28:49.322848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.341 qpair failed and we were unable to recover it. 00:28:03.341 [2024-07-24 19:28:49.323211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.341 [2024-07-24 19:28:49.323252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.341 qpair failed and we were unable to recover it. 00:28:03.341 [2024-07-24 19:28:49.323565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.341 [2024-07-24 19:28:49.323606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.341 qpair failed and we were unable to recover it. 00:28:03.342 [2024-07-24 19:28:49.323887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.342 [2024-07-24 19:28:49.323901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.342 qpair failed and we were unable to recover it. 00:28:03.342 [2024-07-24 19:28:49.324149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.342 [2024-07-24 19:28:49.324189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.342 qpair failed and we were unable to recover it. 00:28:03.342 [2024-07-24 19:28:49.324481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.342 [2024-07-24 19:28:49.324522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.342 qpair failed and we were unable to recover it. 00:28:03.342 [2024-07-24 19:28:49.324909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.342 [2024-07-24 19:28:49.324951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.342 qpair failed and we were unable to recover it. 00:28:03.342 [2024-07-24 19:28:49.325185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.342 [2024-07-24 19:28:49.325226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.342 qpair failed and we were unable to recover it. 00:28:03.342 [2024-07-24 19:28:49.325528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.342 [2024-07-24 19:28:49.325573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.342 qpair failed and we were unable to recover it. 
00:28:03.342 [2024-07-24 19:28:49.325836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.342 [2024-07-24 19:28:49.325859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.342 qpair failed and we were unable to recover it. 00:28:03.342 [2024-07-24 19:28:49.326085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.342 [2024-07-24 19:28:49.326099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.342 qpair failed and we were unable to recover it. 00:28:03.342 [2024-07-24 19:28:49.326423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.342 [2024-07-24 19:28:49.326465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.342 qpair failed and we were unable to recover it. 00:28:03.342 [2024-07-24 19:28:49.326761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.342 [2024-07-24 19:28:49.326803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.342 qpair failed and we were unable to recover it. 00:28:03.342 [2024-07-24 19:28:49.327182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.342 [2024-07-24 19:28:49.327223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.342 qpair failed and we were unable to recover it. 00:28:03.342 [2024-07-24 19:28:49.327521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.342 [2024-07-24 19:28:49.327562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.342 qpair failed and we were unable to recover it. 00:28:03.342 [2024-07-24 19:28:49.327932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.342 [2024-07-24 19:28:49.327946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.342 qpair failed and we were unable to recover it. 00:28:03.342 [2024-07-24 19:28:49.328267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.342 [2024-07-24 19:28:49.328308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.342 qpair failed and we were unable to recover it. 00:28:03.342 [2024-07-24 19:28:49.328677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.342 [2024-07-24 19:28:49.328736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.342 qpair failed and we were unable to recover it. 00:28:03.342 [2024-07-24 19:28:49.329113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.342 [2024-07-24 19:28:49.329166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.342 qpair failed and we were unable to recover it. 
00:28:03.342 [2024-07-24 19:28:49.329469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.342 [2024-07-24 19:28:49.329511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.342 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." record sequence repeats for many further connection attempts ...]
00:28:03.348 [2024-07-24 19:28:49.393058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.348 [2024-07-24 19:28:49.393108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420
00:28:03.348 qpair failed and we were unable to recover it.
00:28:03.348 [2024-07-24 19:28:49.393409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.348 [2024-07-24 19:28:49.393456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420
00:28:03.348 qpair failed and we were unable to recover it.
00:28:03.348 [2024-07-24 19:28:49.393677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.348 [2024-07-24 19:28:49.393729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420
00:28:03.348 qpair failed and we were unable to recover it.
[... further identical connect() failures against tqpair=0x7fd54c000b90 follow ...]
00:28:03.348 [2024-07-24 19:28:49.400751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.348 [2024-07-24 19:28:49.400765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.348 qpair failed and we were unable to recover it.
00:28:03.348 [2024-07-24 19:28:49.401003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.348 [2024-07-24 19:28:49.401017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.348 qpair failed and we were unable to recover it. 00:28:03.348 [2024-07-24 19:28:49.402326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.348 [2024-07-24 19:28:49.402359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.348 qpair failed and we were unable to recover it. 00:28:03.348 [2024-07-24 19:28:49.402695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.348 [2024-07-24 19:28:49.402712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.348 qpair failed and we were unable to recover it. 00:28:03.348 [2024-07-24 19:28:49.402896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.348 [2024-07-24 19:28:49.402913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.348 qpair failed and we were unable to recover it. 00:28:03.348 [2024-07-24 19:28:49.403275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.348 [2024-07-24 19:28:49.403291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.348 qpair failed and we were unable to recover it. 00:28:03.348 [2024-07-24 19:28:49.403539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.348 [2024-07-24 19:28:49.403553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.348 qpair failed and we were unable to recover it. 00:28:03.348 [2024-07-24 19:28:49.403812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.348 [2024-07-24 19:28:49.403827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.348 qpair failed and we were unable to recover it. 00:28:03.348 [2024-07-24 19:28:49.404082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.348 [2024-07-24 19:28:49.404097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.348 qpair failed and we were unable to recover it. 00:28:03.348 [2024-07-24 19:28:49.404421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.349 [2024-07-24 19:28:49.404463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.349 qpair failed and we were unable to recover it. 00:28:03.349 [2024-07-24 19:28:49.404844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.349 [2024-07-24 19:28:49.404887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.349 qpair failed and we were unable to recover it. 
00:28:03.349 [2024-07-24 19:28:49.405197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.349 [2024-07-24 19:28:49.405243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.349 qpair failed and we were unable to recover it. 00:28:03.349 [2024-07-24 19:28:49.405571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.349 [2024-07-24 19:28:49.405612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.349 qpair failed and we were unable to recover it. 00:28:03.349 [2024-07-24 19:28:49.405880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.349 [2024-07-24 19:28:49.405895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.349 qpair failed and we were unable to recover it. 00:28:03.349 [2024-07-24 19:28:49.406168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.349 [2024-07-24 19:28:49.406186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.349 qpair failed and we were unable to recover it. 00:28:03.349 [2024-07-24 19:28:49.406513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.349 [2024-07-24 19:28:49.406555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.349 qpair failed and we were unable to recover it. 00:28:03.349 [2024-07-24 19:28:49.406899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.349 [2024-07-24 19:28:49.406943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.349 qpair failed and we were unable to recover it. 00:28:03.349 [2024-07-24 19:28:49.407258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.349 [2024-07-24 19:28:49.407306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.349 qpair failed and we were unable to recover it. 00:28:03.349 [2024-07-24 19:28:49.407649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.349 [2024-07-24 19:28:49.407691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.349 qpair failed and we were unable to recover it. 00:28:03.349 [2024-07-24 19:28:49.408007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.349 [2024-07-24 19:28:49.408023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.349 qpair failed and we were unable to recover it. 00:28:03.349 [2024-07-24 19:28:49.408294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.349 [2024-07-24 19:28:49.408308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.349 qpair failed and we were unable to recover it. 
00:28:03.349 [2024-07-24 19:28:49.408613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.349 [2024-07-24 19:28:49.408627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.349 qpair failed and we were unable to recover it. 00:28:03.349 [2024-07-24 19:28:49.408967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.349 [2024-07-24 19:28:49.409009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.349 qpair failed and we were unable to recover it. 00:28:03.349 [2024-07-24 19:28:49.409256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.349 [2024-07-24 19:28:49.409297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.349 qpair failed and we were unable to recover it. 00:28:03.349 [2024-07-24 19:28:49.409683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.349 [2024-07-24 19:28:49.409734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.349 qpair failed and we were unable to recover it. 00:28:03.349 [2024-07-24 19:28:49.410097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.349 [2024-07-24 19:28:49.410139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.349 qpair failed and we were unable to recover it. 00:28:03.349 [2024-07-24 19:28:49.410509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.349 [2024-07-24 19:28:49.410549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.349 qpair failed and we were unable to recover it. 00:28:03.349 [2024-07-24 19:28:49.410886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.349 [2024-07-24 19:28:49.410928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.349 qpair failed and we were unable to recover it. 00:28:03.349 [2024-07-24 19:28:49.411318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.349 [2024-07-24 19:28:49.411359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.349 qpair failed and we were unable to recover it. 00:28:03.349 [2024-07-24 19:28:49.411664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.349 [2024-07-24 19:28:49.411705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.349 qpair failed and we were unable to recover it. 00:28:03.349 [2024-07-24 19:28:49.412022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.349 [2024-07-24 19:28:49.412064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.349 qpair failed and we were unable to recover it. 
00:28:03.349 [2024-07-24 19:28:49.412325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.349 [2024-07-24 19:28:49.412366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.349 qpair failed and we were unable to recover it. 00:28:03.349 [2024-07-24 19:28:49.412676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.349 [2024-07-24 19:28:49.412728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.349 qpair failed and we were unable to recover it. 00:28:03.349 [2024-07-24 19:28:49.413037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.349 [2024-07-24 19:28:49.413051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.349 qpair failed and we were unable to recover it. 00:28:03.349 [2024-07-24 19:28:49.413218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.349 [2024-07-24 19:28:49.413258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.349 qpair failed and we were unable to recover it. 00:28:03.349 [2024-07-24 19:28:49.413554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.349 [2024-07-24 19:28:49.413595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.349 qpair failed and we were unable to recover it. 00:28:03.349 [2024-07-24 19:28:49.413974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.349 [2024-07-24 19:28:49.414016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.349 qpair failed and we were unable to recover it. 00:28:03.349 [2024-07-24 19:28:49.414371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.349 [2024-07-24 19:28:49.414413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.349 qpair failed and we were unable to recover it. 00:28:03.349 [2024-07-24 19:28:49.414711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.349 [2024-07-24 19:28:49.414730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.349 qpair failed and we were unable to recover it. 00:28:03.349 [2024-07-24 19:28:49.415041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.349 [2024-07-24 19:28:49.415055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.349 qpair failed and we were unable to recover it. 00:28:03.349 [2024-07-24 19:28:49.415313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.349 [2024-07-24 19:28:49.415327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.349 qpair failed and we were unable to recover it. 
00:28:03.349 [2024-07-24 19:28:49.415651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.349 [2024-07-24 19:28:49.415693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.349 qpair failed and we were unable to recover it. 00:28:03.349 [2024-07-24 19:28:49.416016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.349 [2024-07-24 19:28:49.416058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.349 qpair failed and we were unable to recover it. 00:28:03.349 [2024-07-24 19:28:49.416407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.349 [2024-07-24 19:28:49.416448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.349 qpair failed and we were unable to recover it. 00:28:03.349 [2024-07-24 19:28:49.416844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.350 [2024-07-24 19:28:49.416860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.350 qpair failed and we were unable to recover it. 00:28:03.350 [2024-07-24 19:28:49.417172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.350 [2024-07-24 19:28:49.417188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.350 qpair failed and we were unable to recover it. 00:28:03.350 [2024-07-24 19:28:49.417446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.350 [2024-07-24 19:28:49.417487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.350 qpair failed and we were unable to recover it. 00:28:03.350 [2024-07-24 19:28:49.417849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.350 [2024-07-24 19:28:49.417894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.350 qpair failed and we were unable to recover it. 00:28:03.350 [2024-07-24 19:28:49.418132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.350 [2024-07-24 19:28:49.418146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.350 qpair failed and we were unable to recover it. 00:28:03.350 [2024-07-24 19:28:49.418395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.350 [2024-07-24 19:28:49.418410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.350 qpair failed and we were unable to recover it. 00:28:03.350 [2024-07-24 19:28:49.418699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.350 [2024-07-24 19:28:49.418718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.350 qpair failed and we were unable to recover it. 
00:28:03.350 [2024-07-24 19:28:49.419036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.350 [2024-07-24 19:28:49.419051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.350 qpair failed and we were unable to recover it. 00:28:03.350 [2024-07-24 19:28:49.419317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.350 [2024-07-24 19:28:49.419331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.350 qpair failed and we were unable to recover it. 00:28:03.350 [2024-07-24 19:28:49.419678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.350 [2024-07-24 19:28:49.419730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.350 qpair failed and we were unable to recover it. 00:28:03.350 [2024-07-24 19:28:49.419965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.350 [2024-07-24 19:28:49.420006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.350 qpair failed and we were unable to recover it. 00:28:03.350 [2024-07-24 19:28:49.420233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.350 [2024-07-24 19:28:49.420275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.350 qpair failed and we were unable to recover it. 00:28:03.350 [2024-07-24 19:28:49.420614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.350 [2024-07-24 19:28:49.420655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.350 qpair failed and we were unable to recover it. 00:28:03.350 [2024-07-24 19:28:49.421055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.350 [2024-07-24 19:28:49.421078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.350 qpair failed and we were unable to recover it. 00:28:03.350 [2024-07-24 19:28:49.421439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.350 [2024-07-24 19:28:49.421452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.350 qpair failed and we were unable to recover it. 00:28:03.350 [2024-07-24 19:28:49.421739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.350 [2024-07-24 19:28:49.421754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.350 qpair failed and we were unable to recover it. 00:28:03.350 [2024-07-24 19:28:49.422050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.350 [2024-07-24 19:28:49.422091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.350 qpair failed and we were unable to recover it. 
00:28:03.350 [2024-07-24 19:28:49.422325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.350 [2024-07-24 19:28:49.422366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.350 qpair failed and we were unable to recover it. 00:28:03.350 [2024-07-24 19:28:49.422654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.350 [2024-07-24 19:28:49.422695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.350 qpair failed and we were unable to recover it. 00:28:03.350 [2024-07-24 19:28:49.422947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.350 [2024-07-24 19:28:49.422962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.350 qpair failed and we were unable to recover it. 00:28:03.350 [2024-07-24 19:28:49.423204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.350 [2024-07-24 19:28:49.423218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.350 qpair failed and we were unable to recover it. 00:28:03.350 [2024-07-24 19:28:49.423549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.350 [2024-07-24 19:28:49.423564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.350 qpair failed and we were unable to recover it. 00:28:03.350 [2024-07-24 19:28:49.423822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.350 [2024-07-24 19:28:49.423837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.350 qpair failed and we were unable to recover it. 00:28:03.350 [2024-07-24 19:28:49.424078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.350 [2024-07-24 19:28:49.424092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.350 qpair failed and we were unable to recover it. 00:28:03.350 [2024-07-24 19:28:49.424279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.350 [2024-07-24 19:28:49.424293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.350 qpair failed and we were unable to recover it. 00:28:03.350 [2024-07-24 19:28:49.424528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.350 [2024-07-24 19:28:49.424542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.350 qpair failed and we were unable to recover it. 00:28:03.350 [2024-07-24 19:28:49.424759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.350 [2024-07-24 19:28:49.424773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.350 qpair failed and we were unable to recover it. 
00:28:03.350 [2024-07-24 19:28:49.425025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.350 [2024-07-24 19:28:49.425039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.350 qpair failed and we were unable to recover it. 00:28:03.350 [2024-07-24 19:28:49.425275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.350 [2024-07-24 19:28:49.425288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.350 qpair failed and we were unable to recover it. 00:28:03.350 [2024-07-24 19:28:49.425600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.350 [2024-07-24 19:28:49.425614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.350 qpair failed and we were unable to recover it. 00:28:03.350 [2024-07-24 19:28:49.425859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.350 [2024-07-24 19:28:49.425874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.350 qpair failed and we were unable to recover it. 00:28:03.350 [2024-07-24 19:28:49.426063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.350 [2024-07-24 19:28:49.426077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.350 qpair failed and we were unable to recover it. 00:28:03.350 [2024-07-24 19:28:49.426293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.350 [2024-07-24 19:28:49.426307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.350 qpair failed and we were unable to recover it. 00:28:03.350 [2024-07-24 19:28:49.426548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.350 [2024-07-24 19:28:49.426562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.350 qpair failed and we were unable to recover it. 00:28:03.350 [2024-07-24 19:28:49.426852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.350 [2024-07-24 19:28:49.426867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.350 qpair failed and we were unable to recover it. 00:28:03.350 [2024-07-24 19:28:49.427127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.350 [2024-07-24 19:28:49.427142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.351 qpair failed and we were unable to recover it. 00:28:03.351 [2024-07-24 19:28:49.427419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.351 [2024-07-24 19:28:49.427433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.351 qpair failed and we were unable to recover it. 
00:28:03.351 [2024-07-24 19:28:49.427711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.351 [2024-07-24 19:28:49.427729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.351 qpair failed and we were unable to recover it. 00:28:03.351 [2024-07-24 19:28:49.427979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.351 [2024-07-24 19:28:49.427993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.351 qpair failed and we were unable to recover it. 00:28:03.351 [2024-07-24 19:28:49.428293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.351 [2024-07-24 19:28:49.428338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.351 qpair failed and we were unable to recover it. 00:28:03.351 [2024-07-24 19:28:49.428692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.351 [2024-07-24 19:28:49.428742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.351 qpair failed and we were unable to recover it. 00:28:03.351 [2024-07-24 19:28:49.429027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.351 [2024-07-24 19:28:49.429068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.351 qpair failed and we were unable to recover it. 00:28:03.351 [2024-07-24 19:28:49.429285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.351 [2024-07-24 19:28:49.429325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.351 qpair failed and we were unable to recover it. 00:28:03.351 [2024-07-24 19:28:49.429696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.351 [2024-07-24 19:28:49.429749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.351 qpair failed and we were unable to recover it. 00:28:03.351 [2024-07-24 19:28:49.430077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.351 [2024-07-24 19:28:49.430093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.351 qpair failed and we were unable to recover it. 00:28:03.351 [2024-07-24 19:28:49.430333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.351 [2024-07-24 19:28:49.430347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.351 qpair failed and we were unable to recover it. 00:28:03.351 [2024-07-24 19:28:49.430589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.351 [2024-07-24 19:28:49.430603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.351 qpair failed and we were unable to recover it. 
00:28:03.351 [2024-07-24 19:28:49.430825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.351 [2024-07-24 19:28:49.430840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.351 qpair failed and we were unable to recover it. 00:28:03.351 [2024-07-24 19:28:49.432198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.351 [2024-07-24 19:28:49.432229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.351 qpair failed and we were unable to recover it. 00:28:03.351 [2024-07-24 19:28:49.432646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.351 [2024-07-24 19:28:49.432662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.351 qpair failed and we were unable to recover it. 00:28:03.351 [2024-07-24 19:28:49.432918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.351 [2024-07-24 19:28:49.432934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.351 qpair failed and we were unable to recover it. 00:28:03.351 [2024-07-24 19:28:49.433252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.351 [2024-07-24 19:28:49.433292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.351 qpair failed and we were unable to recover it. 00:28:03.351 [2024-07-24 19:28:49.433664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.351 [2024-07-24 19:28:49.433704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.351 qpair failed and we were unable to recover it. 00:28:03.351 [2024-07-24 19:28:49.434088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.351 [2024-07-24 19:28:49.434138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.351 qpair failed and we were unable to recover it. 00:28:03.351 [2024-07-24 19:28:49.434468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.351 [2024-07-24 19:28:49.434482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.351 qpair failed and we were unable to recover it. 00:28:03.351 [2024-07-24 19:28:49.434725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.351 [2024-07-24 19:28:49.434740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.351 qpair failed and we were unable to recover it. 00:28:03.351 [2024-07-24 19:28:49.435008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.351 [2024-07-24 19:28:49.435022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.351 qpair failed and we were unable to recover it. 
00:28:03.351 [2024-07-24 19:28:49.435326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.351 [2024-07-24 19:28:49.435366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.351 qpair failed and we were unable to recover it. 00:28:03.351 [2024-07-24 19:28:49.435687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.351 [2024-07-24 19:28:49.435829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.351 qpair failed and we were unable to recover it. 00:28:03.351 [2024-07-24 19:28:49.436146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.351 [2024-07-24 19:28:49.436161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.351 qpair failed and we were unable to recover it. 00:28:03.351 [2024-07-24 19:28:49.436397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.351 [2024-07-24 19:28:49.436413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.351 qpair failed and we were unable to recover it. 00:28:03.351 [2024-07-24 19:28:49.436704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.351 [2024-07-24 19:28:49.436724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.351 qpair failed and we were unable to recover it. 00:28:03.351 [2024-07-24 19:28:49.436965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.351 [2024-07-24 19:28:49.436980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.351 qpair failed and we were unable to recover it. 00:28:03.351 [2024-07-24 19:28:49.437242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.351 [2024-07-24 19:28:49.437256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.351 qpair failed and we were unable to recover it. 00:28:03.351 [2024-07-24 19:28:49.437575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.351 [2024-07-24 19:28:49.437589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.351 qpair failed and we were unable to recover it. 00:28:03.351 [2024-07-24 19:28:49.438427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.351 [2024-07-24 19:28:49.438471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.351 qpair failed and we were unable to recover it. 00:28:03.351 [2024-07-24 19:28:49.438727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.351 [2024-07-24 19:28:49.438742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.351 qpair failed and we were unable to recover it. 
00:28:03.351 [2024-07-24 19:28:49.439006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.351 [2024-07-24 19:28:49.439038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.351 qpair failed and we were unable to recover it. 00:28:03.351 [2024-07-24 19:28:49.439393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.352 [2024-07-24 19:28:49.439436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.352 qpair failed and we were unable to recover it. 00:28:03.352 [2024-07-24 19:28:49.439806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.352 [2024-07-24 19:28:49.439848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.352 qpair failed and we were unable to recover it. 00:28:03.352 [2024-07-24 19:28:49.440215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.352 [2024-07-24 19:28:49.440230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.352 qpair failed and we were unable to recover it. 00:28:03.352 [2024-07-24 19:28:49.440503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.352 [2024-07-24 19:28:49.440518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.352 qpair failed and we were unable to recover it. 00:28:03.352 [2024-07-24 19:28:49.440791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.352 [2024-07-24 19:28:49.440833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.352 qpair failed and we were unable to recover it. 00:28:03.352 [2024-07-24 19:28:49.441219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.352 [2024-07-24 19:28:49.441260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.352 qpair failed and we were unable to recover it. 00:28:03.352 [2024-07-24 19:28:49.441563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.352 [2024-07-24 19:28:49.441603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.352 qpair failed and we were unable to recover it. 00:28:03.352 [2024-07-24 19:28:49.441915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.352 [2024-07-24 19:28:49.441957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.352 qpair failed and we were unable to recover it. 00:28:03.352 [2024-07-24 19:28:49.442326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.352 [2024-07-24 19:28:49.442340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.352 qpair failed and we were unable to recover it. 
00:28:03.352 [2024-07-24 19:28:49.442526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.352 [2024-07-24 19:28:49.442541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.352 qpair failed and we were unable to recover it. 00:28:03.352 [2024-07-24 19:28:49.442847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.352 [2024-07-24 19:28:49.442861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.352 qpair failed and we were unable to recover it. 00:28:03.352 [2024-07-24 19:28:49.443129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.352 [2024-07-24 19:28:49.443170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.352 qpair failed and we were unable to recover it. 00:28:03.352 [2024-07-24 19:28:49.443528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.352 [2024-07-24 19:28:49.443570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.352 qpair failed and we were unable to recover it. 00:28:03.352 [2024-07-24 19:28:49.443918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.352 [2024-07-24 19:28:49.443933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.352 qpair failed and we were unable to recover it. 00:28:03.352 [2024-07-24 19:28:49.444123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.352 [2024-07-24 19:28:49.444138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.352 qpair failed and we were unable to recover it. 00:28:03.352 [2024-07-24 19:28:49.444404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.352 [2024-07-24 19:28:49.444419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.352 qpair failed and we were unable to recover it. 00:28:03.352 [2024-07-24 19:28:49.444765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.352 [2024-07-24 19:28:49.444780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.352 qpair failed and we were unable to recover it. 00:28:03.352 [2024-07-24 19:28:49.445093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.352 [2024-07-24 19:28:49.445107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.352 qpair failed and we were unable to recover it. 00:28:03.352 [2024-07-24 19:28:49.445366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.352 [2024-07-24 19:28:49.445380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.352 qpair failed and we were unable to recover it. 
00:28:03.352 [2024-07-24 19:28:49.445608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.352 [2024-07-24 19:28:49.445624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.352 qpair failed and we were unable to recover it. 00:28:03.352 [2024-07-24 19:28:49.445948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.352 [2024-07-24 19:28:49.445962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.352 qpair failed and we were unable to recover it. 00:28:03.352 [2024-07-24 19:28:49.446233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.352 [2024-07-24 19:28:49.446247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.352 qpair failed and we were unable to recover it. 00:28:03.352 [2024-07-24 19:28:49.446425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.352 [2024-07-24 19:28:49.446440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.352 qpair failed and we were unable to recover it. 00:28:03.352 [2024-07-24 19:28:49.446795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.352 [2024-07-24 19:28:49.446809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.352 qpair failed and we were unable to recover it. 00:28:03.352 [2024-07-24 19:28:49.447125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.352 [2024-07-24 19:28:49.447139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.352 qpair failed and we were unable to recover it. 00:28:03.352 [2024-07-24 19:28:49.447429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.352 [2024-07-24 19:28:49.447477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.352 qpair failed and we were unable to recover it. 00:28:03.352 [2024-07-24 19:28:49.447835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.352 [2024-07-24 19:28:49.447877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.352 qpair failed and we were unable to recover it. 00:28:03.352 [2024-07-24 19:28:49.448134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.352 [2024-07-24 19:28:49.448148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.352 qpair failed and we were unable to recover it. 00:28:03.352 [2024-07-24 19:28:49.448388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.352 [2024-07-24 19:28:49.448402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.352 qpair failed and we were unable to recover it. 
00:28:03.352 [2024-07-24 19:28:49.448569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.352 [2024-07-24 19:28:49.448583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.352 qpair failed and we were unable to recover it. 00:28:03.352 [2024-07-24 19:28:49.448893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.352 [2024-07-24 19:28:49.448909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.352 qpair failed and we were unable to recover it. 00:28:03.352 [2024-07-24 19:28:49.449221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.352 [2024-07-24 19:28:49.449236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.352 qpair failed and we were unable to recover it. 00:28:03.352 [2024-07-24 19:28:49.449557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.353 [2024-07-24 19:28:49.449597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.353 qpair failed and we were unable to recover it. 00:28:03.353 [2024-07-24 19:28:49.449943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.353 [2024-07-24 19:28:49.449985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.353 qpair failed and we were unable to recover it. 00:28:03.353 [2024-07-24 19:28:49.450257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.353 [2024-07-24 19:28:49.450297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.353 qpair failed and we were unable to recover it. 00:28:03.353 [2024-07-24 19:28:49.450710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.353 [2024-07-24 19:28:49.450778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.353 qpair failed and we were unable to recover it. 00:28:03.353 [2024-07-24 19:28:49.451084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.353 [2024-07-24 19:28:49.451124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.353 qpair failed and we were unable to recover it. 00:28:03.353 [2024-07-24 19:28:49.451453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.353 [2024-07-24 19:28:49.451467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.353 qpair failed and we were unable to recover it. 00:28:03.353 [2024-07-24 19:28:49.451797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.353 [2024-07-24 19:28:49.451812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.353 qpair failed and we were unable to recover it. 
00:28:03.353 [2024-07-24 19:28:49.452127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.353 [2024-07-24 19:28:49.452142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.353 qpair failed and we were unable to recover it. 00:28:03.353 [2024-07-24 19:28:49.452409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.353 [2024-07-24 19:28:49.452450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.353 qpair failed and we were unable to recover it. 00:28:03.353 [2024-07-24 19:28:49.452796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.353 [2024-07-24 19:28:49.452840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.353 qpair failed and we were unable to recover it. 00:28:03.353 [2024-07-24 19:28:49.453180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.353 [2024-07-24 19:28:49.453195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.353 qpair failed and we were unable to recover it. 00:28:03.353 [2024-07-24 19:28:49.453502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.353 [2024-07-24 19:28:49.453517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.353 qpair failed and we were unable to recover it. 00:28:03.353 [2024-07-24 19:28:49.453733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.353 [2024-07-24 19:28:49.453749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.353 qpair failed and we were unable to recover it. 00:28:03.353 [2024-07-24 19:28:49.453977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.353 [2024-07-24 19:28:49.453991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.353 qpair failed and we were unable to recover it. 00:28:03.353 [2024-07-24 19:28:49.454306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.353 [2024-07-24 19:28:49.454346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.353 qpair failed and we were unable to recover it. 00:28:03.353 [2024-07-24 19:28:49.454692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.353 [2024-07-24 19:28:49.454743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.353 qpair failed and we were unable to recover it. 00:28:03.353 [2024-07-24 19:28:49.455049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.353 [2024-07-24 19:28:49.455063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.353 qpair failed and we were unable to recover it. 
00:28:03.353 [2024-07-24 19:28:49.455300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.353 [2024-07-24 19:28:49.455314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.353 qpair failed and we were unable to recover it. 00:28:03.353 [2024-07-24 19:28:49.455557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.353 [2024-07-24 19:28:49.455570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.353 qpair failed and we were unable to recover it. 00:28:03.353 [2024-07-24 19:28:49.455917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.353 [2024-07-24 19:28:49.455932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.353 qpair failed and we were unable to recover it. 00:28:03.353 [2024-07-24 19:28:49.456170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.353 [2024-07-24 19:28:49.456184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.353 qpair failed and we were unable to recover it. 00:28:03.353 [2024-07-24 19:28:49.456412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.353 [2024-07-24 19:28:49.456426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.353 qpair failed and we were unable to recover it. 00:28:03.353 [2024-07-24 19:28:49.456606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.353 [2024-07-24 19:28:49.456620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.353 qpair failed and we were unable to recover it. 00:28:03.353 [2024-07-24 19:28:49.456904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.353 [2024-07-24 19:28:49.456919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.353 qpair failed and we were unable to recover it. 00:28:03.353 [2024-07-24 19:28:49.457182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.353 [2024-07-24 19:28:49.457196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.353 qpair failed and we were unable to recover it. 00:28:03.353 [2024-07-24 19:28:49.457511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.353 [2024-07-24 19:28:49.457525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.353 qpair failed and we were unable to recover it. 00:28:03.353 [2024-07-24 19:28:49.457761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.353 [2024-07-24 19:28:49.457776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.353 qpair failed and we were unable to recover it. 
00:28:03.353 [2024-07-24 19:28:49.458011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.353 [2024-07-24 19:28:49.458025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.353 qpair failed and we were unable to recover it. 00:28:03.353 [2024-07-24 19:28:49.458185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.353 [2024-07-24 19:28:49.458199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.353 qpair failed and we were unable to recover it. 00:28:03.353 [2024-07-24 19:28:49.458436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.353 [2024-07-24 19:28:49.458450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.353 qpair failed and we were unable to recover it. 00:28:03.353 [2024-07-24 19:28:49.458630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.353 [2024-07-24 19:28:49.458684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.353 qpair failed and we were unable to recover it. 00:28:03.353 [2024-07-24 19:28:49.459002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.353 [2024-07-24 19:28:49.459043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.353 qpair failed and we were unable to recover it. 00:28:03.353 [2024-07-24 19:28:49.459324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.353 [2024-07-24 19:28:49.459365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.353 qpair failed and we were unable to recover it. 00:28:03.353 [2024-07-24 19:28:49.459748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.353 [2024-07-24 19:28:49.459794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.354 qpair failed and we were unable to recover it. 00:28:03.354 [2024-07-24 19:28:49.460068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.354 [2024-07-24 19:28:49.460082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.354 qpair failed and we were unable to recover it. 00:28:03.354 [2024-07-24 19:28:49.460328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.354 [2024-07-24 19:28:49.460370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.354 qpair failed and we were unable to recover it. 00:28:03.354 [2024-07-24 19:28:49.460749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.354 [2024-07-24 19:28:49.460791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.354 qpair failed and we were unable to recover it. 
00:28:03.354 [2024-07-24 19:28:49.461137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.354 [2024-07-24 19:28:49.461179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.354 qpair failed and we were unable to recover it. 00:28:03.354 [2024-07-24 19:28:49.461514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.354 [2024-07-24 19:28:49.461555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.354 qpair failed and we were unable to recover it. 00:28:03.354 [2024-07-24 19:28:49.461837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.354 [2024-07-24 19:28:49.461879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.354 qpair failed and we were unable to recover it. 00:28:03.354 [2024-07-24 19:28:49.462182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.354 [2024-07-24 19:28:49.462223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.354 qpair failed and we were unable to recover it. 00:28:03.354 [2024-07-24 19:28:49.462601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.354 [2024-07-24 19:28:49.462643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.354 qpair failed and we were unable to recover it. 00:28:03.354 [2024-07-24 19:28:49.463015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.354 [2024-07-24 19:28:49.463029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.354 qpair failed and we were unable to recover it. 00:28:03.354 [2024-07-24 19:28:49.463323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.354 [2024-07-24 19:28:49.463363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.354 qpair failed and we were unable to recover it. 00:28:03.354 [2024-07-24 19:28:49.463731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.354 [2024-07-24 19:28:49.463773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.354 qpair failed and we were unable to recover it. 00:28:03.354 [2024-07-24 19:28:49.464132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.354 [2024-07-24 19:28:49.464172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.354 qpair failed and we were unable to recover it. 00:28:03.354 [2024-07-24 19:28:49.464415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.354 [2024-07-24 19:28:49.464456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.354 qpair failed and we were unable to recover it. 
00:28:03.354 [2024-07-24 19:28:49.464781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.354 [2024-07-24 19:28:49.464823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.354 qpair failed and we were unable to recover it. 00:28:03.354 [2024-07-24 19:28:49.465133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.354 [2024-07-24 19:28:49.465173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.354 qpair failed and we were unable to recover it. 00:28:03.354 [2024-07-24 19:28:49.465524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.354 [2024-07-24 19:28:49.465566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.354 qpair failed and we were unable to recover it. 00:28:03.354 [2024-07-24 19:28:49.465895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.354 [2024-07-24 19:28:49.465935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.354 qpair failed and we were unable to recover it. 00:28:03.354 [2024-07-24 19:28:49.466170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.354 [2024-07-24 19:28:49.466184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.354 qpair failed and we were unable to recover it. 00:28:03.354 [2024-07-24 19:28:49.466441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.354 [2024-07-24 19:28:49.466482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.354 qpair failed and we were unable to recover it. 00:28:03.354 [2024-07-24 19:28:49.466794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.354 [2024-07-24 19:28:49.466842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.354 qpair failed and we were unable to recover it. 00:28:03.354 [2024-07-24 19:28:49.467059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.354 [2024-07-24 19:28:49.467073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.354 qpair failed and we were unable to recover it. 00:28:03.354 [2024-07-24 19:28:49.467373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.354 [2024-07-24 19:28:49.467413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.354 qpair failed and we were unable to recover it. 00:28:03.354 [2024-07-24 19:28:49.467709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.354 [2024-07-24 19:28:49.467771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.354 qpair failed and we were unable to recover it. 
00:28:03.354 [2024-07-24 19:28:49.468116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.354 [2024-07-24 19:28:49.468157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.354 qpair failed and we were unable to recover it. 00:28:03.354 [2024-07-24 19:28:49.468391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.354 [2024-07-24 19:28:49.468431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.354 qpair failed and we were unable to recover it. 00:28:03.354 [2024-07-24 19:28:49.468749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.354 [2024-07-24 19:28:49.468791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.354 qpair failed and we were unable to recover it. 00:28:03.354 [2024-07-24 19:28:49.469168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.354 [2024-07-24 19:28:49.469209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.354 qpair failed and we were unable to recover it. 00:28:03.354 [2024-07-24 19:28:49.469599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.354 [2024-07-24 19:28:49.469640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.354 qpair failed and we were unable to recover it. 00:28:03.354 [2024-07-24 19:28:49.470023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.354 [2024-07-24 19:28:49.470071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.354 qpair failed and we were unable to recover it. 00:28:03.354 [2024-07-24 19:28:49.470308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.354 [2024-07-24 19:28:49.470321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.354 qpair failed and we were unable to recover it. 00:28:03.354 [2024-07-24 19:28:49.470575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.355 [2024-07-24 19:28:49.470629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.355 qpair failed and we were unable to recover it. 00:28:03.355 [2024-07-24 19:28:49.470931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.355 [2024-07-24 19:28:49.470973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.355 qpair failed and we were unable to recover it. 00:28:03.355 [2024-07-24 19:28:49.471269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.355 [2024-07-24 19:28:49.471309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.355 qpair failed and we were unable to recover it. 
00:28:03.355 [2024-07-24 19:28:49.471703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.355 [2024-07-24 19:28:49.471753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.355 qpair failed and we were unable to recover it. 00:28:03.355 [2024-07-24 19:28:49.472047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.355 [2024-07-24 19:28:49.472087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.355 qpair failed and we were unable to recover it. 00:28:03.355 [2024-07-24 19:28:49.472367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.355 [2024-07-24 19:28:49.472408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.355 qpair failed and we were unable to recover it. 00:28:03.355 [2024-07-24 19:28:49.472698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.355 [2024-07-24 19:28:49.472750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.355 qpair failed and we were unable to recover it. 00:28:03.355 [2024-07-24 19:28:49.473059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.355 [2024-07-24 19:28:49.473108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.355 qpair failed and we were unable to recover it. 00:28:03.355 [2024-07-24 19:28:49.473415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.355 [2024-07-24 19:28:49.473456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.355 qpair failed and we were unable to recover it. 00:28:03.355 [2024-07-24 19:28:49.473738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.355 [2024-07-24 19:28:49.473786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.355 qpair failed and we were unable to recover it. 00:28:03.355 [2024-07-24 19:28:49.474134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.355 [2024-07-24 19:28:49.474175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.355 qpair failed and we were unable to recover it. 00:28:03.355 [2024-07-24 19:28:49.474534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.355 [2024-07-24 19:28:49.474575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.355 qpair failed and we were unable to recover it. 00:28:03.355 [2024-07-24 19:28:49.474899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.355 [2024-07-24 19:28:49.474915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.355 qpair failed and we were unable to recover it. 
00:28:03.355 [2024-07-24 19:28:49.475163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.355 [2024-07-24 19:28:49.475204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.355 qpair failed and we were unable to recover it. 00:28:03.355 [2024-07-24 19:28:49.475529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.355 [2024-07-24 19:28:49.475569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.355 qpair failed and we were unable to recover it. 00:28:03.355 [2024-07-24 19:28:49.475897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.355 [2024-07-24 19:28:49.475939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.355 qpair failed and we were unable to recover it. 00:28:03.355 [2024-07-24 19:28:49.476156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.355 [2024-07-24 19:28:49.476197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.355 qpair failed and we were unable to recover it. 00:28:03.355 [2024-07-24 19:28:49.476567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.355 [2024-07-24 19:28:49.476607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.355 qpair failed and we were unable to recover it. 00:28:03.355 [2024-07-24 19:28:49.476966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.355 [2024-07-24 19:28:49.476980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.355 qpair failed and we were unable to recover it. 00:28:03.355 [2024-07-24 19:28:49.477294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.355 [2024-07-24 19:28:49.477334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.355 qpair failed and we were unable to recover it. 00:28:03.355 [2024-07-24 19:28:49.477624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.355 [2024-07-24 19:28:49.477664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.355 qpair failed and we were unable to recover it. 00:28:03.355 [2024-07-24 19:28:49.478037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.355 [2024-07-24 19:28:49.478051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.355 qpair failed and we were unable to recover it. 00:28:03.355 [2024-07-24 19:28:49.478345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.355 [2024-07-24 19:28:49.478386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.355 qpair failed and we were unable to recover it. 
00:28:03.355 [2024-07-24 19:28:49.478759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.355 [2024-07-24 19:28:49.478801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.355 qpair failed and we were unable to recover it. 00:28:03.355 [2024-07-24 19:28:49.479092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.355 [2024-07-24 19:28:49.479106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.355 qpair failed and we were unable to recover it. 00:28:03.355 [2024-07-24 19:28:49.479329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.355 [2024-07-24 19:28:49.479370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.355 qpair failed and we were unable to recover it. 00:28:03.355 [2024-07-24 19:28:49.479757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.355 [2024-07-24 19:28:49.479798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.355 qpair failed and we were unable to recover it. 00:28:03.355 [2024-07-24 19:28:49.480139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.355 [2024-07-24 19:28:49.480154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.355 qpair failed and we were unable to recover it. 00:28:03.355 [2024-07-24 19:28:49.480446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.355 [2024-07-24 19:28:49.480460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.355 qpair failed and we were unable to recover it. 00:28:03.355 [2024-07-24 19:28:49.480784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.355 [2024-07-24 19:28:49.480825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.355 qpair failed and we were unable to recover it. 00:28:03.355 [2024-07-24 19:28:49.481199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.355 [2024-07-24 19:28:49.481241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.355 qpair failed and we were unable to recover it. 00:28:03.355 [2024-07-24 19:28:49.481618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.355 [2024-07-24 19:28:49.481658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.355 qpair failed and we were unable to recover it. 00:28:03.355 [2024-07-24 19:28:49.482037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.355 [2024-07-24 19:28:49.482079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.355 qpair failed and we were unable to recover it. 
00:28:03.355 [2024-07-24 19:28:49.482414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.355 [2024-07-24 19:28:49.482427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.355 qpair failed and we were unable to recover it. 00:28:03.356 [2024-07-24 19:28:49.482665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.356 [2024-07-24 19:28:49.482680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.356 qpair failed and we were unable to recover it. 00:28:03.356 [2024-07-24 19:28:49.482986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.356 [2024-07-24 19:28:49.483038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.356 qpair failed and we were unable to recover it. 00:28:03.356 [2024-07-24 19:28:49.483285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.356 [2024-07-24 19:28:49.483325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.356 qpair failed and we were unable to recover it. 00:28:03.356 [2024-07-24 19:28:49.483700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.356 [2024-07-24 19:28:49.483770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.356 qpair failed and we were unable to recover it. 00:28:03.356 [2024-07-24 19:28:49.484132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.356 [2024-07-24 19:28:49.484146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.356 qpair failed and we were unable to recover it. 00:28:03.356 [2024-07-24 19:28:49.484368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.356 [2024-07-24 19:28:49.484383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.356 qpair failed and we were unable to recover it. 00:28:03.356 [2024-07-24 19:28:49.484600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.356 [2024-07-24 19:28:49.484613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.356 qpair failed and we were unable to recover it. 00:28:03.356 [2024-07-24 19:28:49.484870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.356 [2024-07-24 19:28:49.484916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.356 qpair failed and we were unable to recover it. 00:28:03.356 [2024-07-24 19:28:49.485208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.356 [2024-07-24 19:28:49.485249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.356 qpair failed and we were unable to recover it. 
00:28:03.356 [2024-07-24 19:28:49.485531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.356 [2024-07-24 19:28:49.485571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.356 qpair failed and we were unable to recover it. 00:28:03.356 [2024-07-24 19:28:49.485866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.356 [2024-07-24 19:28:49.485908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.356 qpair failed and we were unable to recover it. 00:28:03.356 [2024-07-24 19:28:49.486244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.356 [2024-07-24 19:28:49.486284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.356 qpair failed and we were unable to recover it. 00:28:03.356 [2024-07-24 19:28:49.486601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.356 [2024-07-24 19:28:49.486642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.356 qpair failed and we were unable to recover it. 00:28:03.356 [2024-07-24 19:28:49.486939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.356 [2024-07-24 19:28:49.486981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.356 qpair failed and we were unable to recover it. 00:28:03.356 [2024-07-24 19:28:49.487350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.356 [2024-07-24 19:28:49.487390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.356 qpair failed and we were unable to recover it. 00:28:03.356 [2024-07-24 19:28:49.487756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.356 [2024-07-24 19:28:49.487803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.356 qpair failed and we were unable to recover it. 00:28:03.356 [2024-07-24 19:28:49.488185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.356 [2024-07-24 19:28:49.488226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.356 qpair failed and we were unable to recover it. 00:28:03.356 [2024-07-24 19:28:49.488538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.356 [2024-07-24 19:28:49.488578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.356 qpair failed and we were unable to recover it. 00:28:03.356 [2024-07-24 19:28:49.488949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.356 [2024-07-24 19:28:49.488990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.356 qpair failed and we were unable to recover it. 
00:28:03.356 [2024-07-24 19:28:49.489330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.356 [2024-07-24 19:28:49.489345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.356 qpair failed and we were unable to recover it. 00:28:03.356 [2024-07-24 19:28:49.489598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.356 [2024-07-24 19:28:49.489611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.356 qpair failed and we were unable to recover it. 00:28:03.356 [2024-07-24 19:28:49.489830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.356 [2024-07-24 19:28:49.489844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.356 qpair failed and we were unable to recover it. 00:28:03.356 [2024-07-24 19:28:49.490069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.356 [2024-07-24 19:28:49.490082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.356 qpair failed and we were unable to recover it. 00:28:03.356 [2024-07-24 19:28:49.490382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.356 [2024-07-24 19:28:49.490423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.356 qpair failed and we were unable to recover it. 00:28:03.356 [2024-07-24 19:28:49.490656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.356 [2024-07-24 19:28:49.490697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.356 qpair failed and we were unable to recover it. 00:28:03.356 [2024-07-24 19:28:49.491014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.356 [2024-07-24 19:28:49.491027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.356 qpair failed and we were unable to recover it. 00:28:03.356 [2024-07-24 19:28:49.491263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.356 [2024-07-24 19:28:49.491276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.356 qpair failed and we were unable to recover it. 00:28:03.356 [2024-07-24 19:28:49.491539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.356 [2024-07-24 19:28:49.491580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.356 qpair failed and we were unable to recover it. 00:28:03.356 [2024-07-24 19:28:49.491807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.356 [2024-07-24 19:28:49.491851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.356 qpair failed and we were unable to recover it. 
00:28:03.356 [2024-07-24 19:28:49.492134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.356 [2024-07-24 19:28:49.492175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.356 qpair failed and we were unable to recover it. 00:28:03.356 [2024-07-24 19:28:49.492485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.356 [2024-07-24 19:28:49.492525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.356 qpair failed and we were unable to recover it. 00:28:03.356 [2024-07-24 19:28:49.492835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.356 [2024-07-24 19:28:49.492876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.356 qpair failed and we were unable to recover it. 00:28:03.356 [2024-07-24 19:28:49.493179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.356 [2024-07-24 19:28:49.493219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.356 qpair failed and we were unable to recover it. 00:28:03.356 [2024-07-24 19:28:49.493558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.356 [2024-07-24 19:28:49.493572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.357 qpair failed and we were unable to recover it. 00:28:03.357 [2024-07-24 19:28:49.493896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.357 [2024-07-24 19:28:49.493937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.357 qpair failed and we were unable to recover it. 00:28:03.357 [2024-07-24 19:28:49.494213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.357 [2024-07-24 19:28:49.494254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.357 qpair failed and we were unable to recover it. 00:28:03.357 [2024-07-24 19:28:49.494550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.357 [2024-07-24 19:28:49.494591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.357 qpair failed and we were unable to recover it. 00:28:03.357 [2024-07-24 19:28:49.494962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.357 [2024-07-24 19:28:49.494976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.357 qpair failed and we were unable to recover it. 00:28:03.357 [2024-07-24 19:28:49.495301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.357 [2024-07-24 19:28:49.495342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.357 qpair failed and we were unable to recover it. 
00:28:03.357 [2024-07-24 19:28:49.495701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.357 [2024-07-24 19:28:49.495757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.357 qpair failed and we were unable to recover it. 00:28:03.357 [2024-07-24 19:28:49.496059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.357 [2024-07-24 19:28:49.496100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.357 qpair failed and we were unable to recover it. 00:28:03.357 [2024-07-24 19:28:49.496439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.357 [2024-07-24 19:28:49.496453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.357 qpair failed and we were unable to recover it. 00:28:03.357 [2024-07-24 19:28:49.496697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.357 [2024-07-24 19:28:49.496741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.357 qpair failed and we were unable to recover it. 00:28:03.357 [2024-07-24 19:28:49.497003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.357 [2024-07-24 19:28:49.497044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.357 qpair failed and we were unable to recover it. 00:28:03.357 [2024-07-24 19:28:49.497392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.357 [2024-07-24 19:28:49.497433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.357 qpair failed and we were unable to recover it. 00:28:03.357 [2024-07-24 19:28:49.497731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.357 [2024-07-24 19:28:49.497772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.357 qpair failed and we were unable to recover it. 00:28:03.357 [2024-07-24 19:28:49.498047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.357 [2024-07-24 19:28:49.498061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.357 qpair failed and we were unable to recover it. 00:28:03.357 [2024-07-24 19:28:49.498276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.357 [2024-07-24 19:28:49.498290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.357 qpair failed and we were unable to recover it. 00:28:03.357 [2024-07-24 19:28:49.498591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.357 [2024-07-24 19:28:49.498605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.357 qpair failed and we were unable to recover it. 
00:28:03.357 [2024-07-24 19:28:49.498884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.357 [2024-07-24 19:28:49.498926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.357 qpair failed and we were unable to recover it. 00:28:03.357 [2024-07-24 19:28:49.499201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.357 [2024-07-24 19:28:49.499241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.357 qpair failed and we were unable to recover it. 00:28:03.357 [2024-07-24 19:28:49.499627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.357 [2024-07-24 19:28:49.499667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.357 qpair failed and we were unable to recover it. 00:28:03.357 [2024-07-24 19:28:49.499976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.357 [2024-07-24 19:28:49.500016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.357 qpair failed and we were unable to recover it. 00:28:03.357 [2024-07-24 19:28:49.500324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.357 [2024-07-24 19:28:49.500337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.357 qpair failed and we were unable to recover it. 00:28:03.357 [2024-07-24 19:28:49.500671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.357 [2024-07-24 19:28:49.500712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.357 qpair failed and we were unable to recover it. 00:28:03.357 [2024-07-24 19:28:49.501037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.357 [2024-07-24 19:28:49.501084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.357 qpair failed and we were unable to recover it. 00:28:03.357 [2024-07-24 19:28:49.501441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.357 [2024-07-24 19:28:49.501455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.357 qpair failed and we were unable to recover it. 00:28:03.357 [2024-07-24 19:28:49.501705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.357 [2024-07-24 19:28:49.501751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.357 qpair failed and we were unable to recover it. 00:28:03.357 [2024-07-24 19:28:49.502034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.357 [2024-07-24 19:28:49.502075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.357 qpair failed and we were unable to recover it. 
00:28:03.357 [2024-07-24 19:28:49.502442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.357 [2024-07-24 19:28:49.502482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.357 qpair failed and we were unable to recover it. 00:28:03.357 [2024-07-24 19:28:49.502851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.357 [2024-07-24 19:28:49.502892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.357 qpair failed and we were unable to recover it. 00:28:03.357 [2024-07-24 19:28:49.503142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.357 [2024-07-24 19:28:49.503183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.357 qpair failed and we were unable to recover it. 00:28:03.357 [2024-07-24 19:28:49.503509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.358 [2024-07-24 19:28:49.503551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.358 qpair failed and we were unable to recover it. 00:28:03.358 [2024-07-24 19:28:49.503828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.358 [2024-07-24 19:28:49.503869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.358 qpair failed and we were unable to recover it. 00:28:03.358 [2024-07-24 19:28:49.504185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.358 [2024-07-24 19:28:49.504226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.358 qpair failed and we were unable to recover it. 00:28:03.358 [2024-07-24 19:28:49.504541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.358 [2024-07-24 19:28:49.504582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.358 qpair failed and we were unable to recover it. 00:28:03.358 [2024-07-24 19:28:49.504923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.358 [2024-07-24 19:28:49.504964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.358 qpair failed and we were unable to recover it. 00:28:03.358 [2024-07-24 19:28:49.505287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.358 [2024-07-24 19:28:49.505328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.358 qpair failed and we were unable to recover it. 00:28:03.358 [2024-07-24 19:28:49.505698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.358 [2024-07-24 19:28:49.505766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.358 qpair failed and we were unable to recover it. 
00:28:03.358 [2024-07-24 19:28:49.505989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.358 [2024-07-24 19:28:49.506030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.358 qpair failed and we were unable to recover it. 00:28:03.358 [2024-07-24 19:28:49.506255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.358 [2024-07-24 19:28:49.506295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.358 qpair failed and we were unable to recover it. 00:28:03.358 [2024-07-24 19:28:49.506657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.358 [2024-07-24 19:28:49.506697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.358 qpair failed and we were unable to recover it. 00:28:03.358 [2024-07-24 19:28:49.507022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.358 [2024-07-24 19:28:49.507053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.358 qpair failed and we were unable to recover it. 00:28:03.358 [2024-07-24 19:28:49.507299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.358 [2024-07-24 19:28:49.507341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.358 qpair failed and we were unable to recover it. 00:28:03.358 [2024-07-24 19:28:49.507735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.358 [2024-07-24 19:28:49.507777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.358 qpair failed and we were unable to recover it. 00:28:03.358 [2024-07-24 19:28:49.508093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.358 [2024-07-24 19:28:49.508139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.358 qpair failed and we were unable to recover it. 00:28:03.358 [2024-07-24 19:28:49.508474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.358 [2024-07-24 19:28:49.508516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.358 qpair failed and we were unable to recover it. 00:28:03.358 [2024-07-24 19:28:49.508955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.358 [2024-07-24 19:28:49.508996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.358 qpair failed and we were unable to recover it. 00:28:03.358 [2024-07-24 19:28:49.509273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.358 [2024-07-24 19:28:49.509314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.358 qpair failed and we were unable to recover it. 
00:28:03.358 [2024-07-24 19:28:49.509690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.358 [2024-07-24 19:28:49.509758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.358 qpair failed and we were unable to recover it.
00:28:03.358 [2024-07-24 19:28:49.510062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.358 [2024-07-24 19:28:49.510102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.358 qpair failed and we were unable to recover it.
00:28:03.358 [2024-07-24 19:28:49.510325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.358 [2024-07-24 19:28:49.510338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.358 qpair failed and we were unable to recover it.
00:28:03.358 [2024-07-24 19:28:49.510510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.358 [2024-07-24 19:28:49.510524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.358 qpair failed and we were unable to recover it.
00:28:03.358 [2024-07-24 19:28:49.510836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.358 [2024-07-24 19:28:49.510877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.358 qpair failed and we were unable to recover it.
00:28:03.358 [2024-07-24 19:28:49.511097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.358 [2024-07-24 19:28:49.511111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.358 qpair failed and we were unable to recover it.
00:28:03.358 [2024-07-24 19:28:49.511374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.358 [2024-07-24 19:28:49.511415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.358 qpair failed and we were unable to recover it.
00:28:03.358 [2024-07-24 19:28:49.511773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.358 [2024-07-24 19:28:49.511814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.358 qpair failed and we were unable to recover it.
00:28:03.358 [2024-07-24 19:28:49.512124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.358 [2024-07-24 19:28:49.512164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.358 qpair failed and we were unable to recover it.
00:28:03.358 [2024-07-24 19:28:49.512478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.358 [2024-07-24 19:28:49.512520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.358 qpair failed and we were unable to recover it.
00:28:03.358 [2024-07-24 19:28:49.512862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.358 [2024-07-24 19:28:49.512904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.358 qpair failed and we were unable to recover it.
00:28:03.358 [2024-07-24 19:28:49.513224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.358 [2024-07-24 19:28:49.513264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.358 qpair failed and we were unable to recover it.
00:28:03.358 [2024-07-24 19:28:49.513569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.358 [2024-07-24 19:28:49.513609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.358 qpair failed and we were unable to recover it.
00:28:03.358 [2024-07-24 19:28:49.513985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.358 [2024-07-24 19:28:49.514026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.358 qpair failed and we were unable to recover it.
00:28:03.358 [2024-07-24 19:28:49.514319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.358 [2024-07-24 19:28:49.514360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.358 qpair failed and we were unable to recover it.
00:28:03.358 [2024-07-24 19:28:49.514737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.358 [2024-07-24 19:28:49.514779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.358 qpair failed and we were unable to recover it.
00:28:03.358 [2024-07-24 19:28:49.515122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.358 [2024-07-24 19:28:49.515174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.358 qpair failed and we were unable to recover it.
00:28:03.358 [2024-07-24 19:28:49.515561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.358 [2024-07-24 19:28:49.515602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.358 qpair failed and we were unable to recover it.
00:28:03.358 [2024-07-24 19:28:49.515929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.359 [2024-07-24 19:28:49.515972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.359 qpair failed and we were unable to recover it.
00:28:03.359 [2024-07-24 19:28:49.516339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.359 [2024-07-24 19:28:49.516353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.359 qpair failed and we were unable to recover it.
00:28:03.359 [2024-07-24 19:28:49.516663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.359 [2024-07-24 19:28:49.516676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.359 qpair failed and we were unable to recover it.
00:28:03.359 [2024-07-24 19:28:49.516926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.359 [2024-07-24 19:28:49.516968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.359 qpair failed and we were unable to recover it.
00:28:03.359 [2024-07-24 19:28:49.517264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.359 [2024-07-24 19:28:49.517304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.359 qpair failed and we were unable to recover it.
00:28:03.359 [2024-07-24 19:28:49.517611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.359 [2024-07-24 19:28:49.517652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.359 qpair failed and we were unable to recover it.
00:28:03.359 [2024-07-24 19:28:49.517975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.359 [2024-07-24 19:28:49.518017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.359 qpair failed and we were unable to recover it.
00:28:03.359 [2024-07-24 19:28:49.518389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.359 [2024-07-24 19:28:49.518430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.359 qpair failed and we were unable to recover it.
00:28:03.359 [2024-07-24 19:28:49.518797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.359 [2024-07-24 19:28:49.518839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.359 qpair failed and we were unable to recover it.
00:28:03.359 [2024-07-24 19:28:49.519190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.359 [2024-07-24 19:28:49.519231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.359 qpair failed and we were unable to recover it.
00:28:03.359 [2024-07-24 19:28:49.519468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.359 [2024-07-24 19:28:49.519509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.359 qpair failed and we were unable to recover it.
00:28:03.359 [2024-07-24 19:28:49.519956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.359 [2024-07-24 19:28:49.519971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.359 qpair failed and we were unable to recover it.
00:28:03.359 [2024-07-24 19:28:49.520205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.359 [2024-07-24 19:28:49.520219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.359 qpair failed and we were unable to recover it.
00:28:03.359 [2024-07-24 19:28:49.520472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.359 [2024-07-24 19:28:49.520513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.359 qpair failed and we were unable to recover it.
00:28:03.359 [2024-07-24 19:28:49.520861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.359 [2024-07-24 19:28:49.520903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.359 qpair failed and we were unable to recover it.
00:28:03.359 [2024-07-24 19:28:49.521286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.359 [2024-07-24 19:28:49.521328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.359 qpair failed and we were unable to recover it.
00:28:03.359 [2024-07-24 19:28:49.521561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.359 [2024-07-24 19:28:49.521603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.359 qpair failed and we were unable to recover it.
00:28:03.359 [2024-07-24 19:28:49.522012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.359 [2024-07-24 19:28:49.522055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.359 qpair failed and we were unable to recover it.
00:28:03.359 [2024-07-24 19:28:49.522370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.359 [2024-07-24 19:28:49.522411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.359 qpair failed and we were unable to recover it.
00:28:03.359 [2024-07-24 19:28:49.522779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.359 [2024-07-24 19:28:49.522821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.359 qpair failed and we were unable to recover it.
00:28:03.359 [2024-07-24 19:28:49.523116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.359 [2024-07-24 19:28:49.523131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.359 qpair failed and we were unable to recover it.
00:28:03.359 [2024-07-24 19:28:49.523474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.359 [2024-07-24 19:28:49.523515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.359 qpair failed and we were unable to recover it.
00:28:03.359 [2024-07-24 19:28:49.523763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.359 [2024-07-24 19:28:49.523805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.359 qpair failed and we were unable to recover it.
00:28:03.359 [2024-07-24 19:28:49.524179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.359 [2024-07-24 19:28:49.524219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.359 qpair failed and we were unable to recover it.
00:28:03.359 [2024-07-24 19:28:49.524541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.359 [2024-07-24 19:28:49.524581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.359 qpair failed and we were unable to recover it.
00:28:03.359 [2024-07-24 19:28:49.524942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.359 [2024-07-24 19:28:49.524987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.359 qpair failed and we were unable to recover it.
00:28:03.359 [2024-07-24 19:28:49.525242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.359 [2024-07-24 19:28:49.525282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.359 qpair failed and we were unable to recover it.
00:28:03.359 [2024-07-24 19:28:49.525672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.359 [2024-07-24 19:28:49.525725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.359 qpair failed and we were unable to recover it.
00:28:03.359 [2024-07-24 19:28:49.526047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.359 [2024-07-24 19:28:49.526088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.359 qpair failed and we were unable to recover it.
00:28:03.359 [2024-07-24 19:28:49.526491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.359 [2024-07-24 19:28:49.526531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.359 qpair failed and we were unable to recover it.
00:28:03.359 [2024-07-24 19:28:49.526826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.359 [2024-07-24 19:28:49.526868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.359 qpair failed and we were unable to recover it.
00:28:03.359 [2024-07-24 19:28:49.527171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.359 [2024-07-24 19:28:49.527213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.359 qpair failed and we were unable to recover it.
00:28:03.359 [2024-07-24 19:28:49.527579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.359 [2024-07-24 19:28:49.527621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.359 qpair failed and we were unable to recover it.
00:28:03.359 [2024-07-24 19:28:49.527929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.359 [2024-07-24 19:28:49.527972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.359 qpair failed and we were unable to recover it.
00:28:03.359 [2024-07-24 19:28:49.528218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.359 [2024-07-24 19:28:49.528260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.359 qpair failed and we were unable to recover it.
00:28:03.359 [2024-07-24 19:28:49.528648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.360 [2024-07-24 19:28:49.528689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.360 qpair failed and we were unable to recover it.
00:28:03.360 [2024-07-24 19:28:49.528996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.360 [2024-07-24 19:28:49.529038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.360 qpair failed and we were unable to recover it.
00:28:03.360 [2024-07-24 19:28:49.529355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.360 [2024-07-24 19:28:49.529369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.360 qpair failed and we were unable to recover it.
00:28:03.360 [2024-07-24 19:28:49.529598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.360 [2024-07-24 19:28:49.529645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.360 qpair failed and we were unable to recover it.
00:28:03.360 [2024-07-24 19:28:49.529987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.360 [2024-07-24 19:28:49.530028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.360 qpair failed and we were unable to recover it.
00:28:03.360 [2024-07-24 19:28:49.530248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.360 [2024-07-24 19:28:49.530289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.360 qpair failed and we were unable to recover it.
00:28:03.360 [2024-07-24 19:28:49.531162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.360 [2024-07-24 19:28:49.531186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.360 qpair failed and we were unable to recover it.
00:28:03.360 [2024-07-24 19:28:49.532261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.360 [2024-07-24 19:28:49.532290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.360 qpair failed and we were unable to recover it.
00:28:03.360 [2024-07-24 19:28:49.532613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.360 [2024-07-24 19:28:49.532656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.360 qpair failed and we were unable to recover it.
00:28:03.360 [2024-07-24 19:28:49.533869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.360 [2024-07-24 19:28:49.533898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.360 qpair failed and we were unable to recover it.
00:28:03.360 [2024-07-24 19:28:49.534119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.360 [2024-07-24 19:28:49.534133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.360 qpair failed and we were unable to recover it.
00:28:03.360 [2024-07-24 19:28:49.534479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.360 [2024-07-24 19:28:49.534520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.360 qpair failed and we were unable to recover it.
00:28:03.360 [2024-07-24 19:28:49.534826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.360 [2024-07-24 19:28:49.534869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.360 qpair failed and we were unable to recover it.
00:28:03.360 [2024-07-24 19:28:49.535104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.360 [2024-07-24 19:28:49.535117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.360 qpair failed and we were unable to recover it.
00:28:03.360 [2024-07-24 19:28:49.535362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.360 [2024-07-24 19:28:49.535374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.360 qpair failed and we were unable to recover it.
00:28:03.360 [2024-07-24 19:28:49.535685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.360 [2024-07-24 19:28:49.535699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.360 qpair failed and we were unable to recover it.
00:28:03.360 [2024-07-24 19:28:49.535945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.360 [2024-07-24 19:28:49.535960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.360 qpair failed and we were unable to recover it.
00:28:03.360 [2024-07-24 19:28:49.536232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.360 [2024-07-24 19:28:49.536273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.360 qpair failed and we were unable to recover it.
00:28:03.360 [2024-07-24 19:28:49.536597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.360 [2024-07-24 19:28:49.536638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.360 qpair failed and we were unable to recover it.
00:28:03.360 [2024-07-24 19:28:49.536943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.360 [2024-07-24 19:28:49.536986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.360 qpair failed and we were unable to recover it.
00:28:03.360 [2024-07-24 19:28:49.537316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.360 [2024-07-24 19:28:49.537357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.360 qpair failed and we were unable to recover it.
00:28:03.360 [2024-07-24 19:28:49.537733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.360 [2024-07-24 19:28:49.537775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.360 qpair failed and we were unable to recover it.
00:28:03.360 [2024-07-24 19:28:49.538025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.360 [2024-07-24 19:28:49.538065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.360 qpair failed and we were unable to recover it.
00:28:03.360 [2024-07-24 19:28:49.538370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.360 [2024-07-24 19:28:49.538385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.360 qpair failed and we were unable to recover it.
00:28:03.360 [2024-07-24 19:28:49.538640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.360 [2024-07-24 19:28:49.538681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.360 qpair failed and we were unable to recover it.
00:28:03.360 [2024-07-24 19:28:49.539022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.360 [2024-07-24 19:28:49.539067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.360 qpair failed and we were unable to recover it.
00:28:03.360 [2024-07-24 19:28:49.539344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.360 [2024-07-24 19:28:49.539359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.360 qpair failed and we were unable to recover it.
00:28:03.360 [2024-07-24 19:28:49.539654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.360 [2024-07-24 19:28:49.539695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.360 qpair failed and we were unable to recover it.
00:28:03.360 [2024-07-24 19:28:49.540044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.360 [2024-07-24 19:28:49.540086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.360 qpair failed and we were unable to recover it.
00:28:03.360 [2024-07-24 19:28:49.540392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.360 [2024-07-24 19:28:49.540408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.360 qpair failed and we were unable to recover it.
00:28:03.360 [2024-07-24 19:28:49.540645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.360 [2024-07-24 19:28:49.540661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.360 qpair failed and we were unable to recover it.
00:28:03.360 [2024-07-24 19:28:49.540914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.360 [2024-07-24 19:28:49.540929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.360 qpair failed and we were unable to recover it.
00:28:03.360 [2024-07-24 19:28:49.541162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.360 [2024-07-24 19:28:49.541177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.360 qpair failed and we were unable to recover it.
00:28:03.360 [2024-07-24 19:28:49.541409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.360 [2024-07-24 19:28:49.541426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.360 qpair failed and we were unable to recover it.
00:28:03.360 [2024-07-24 19:28:49.541593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.360 [2024-07-24 19:28:49.541609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.360 qpair failed and we were unable to recover it.
00:28:03.360 [2024-07-24 19:28:49.541867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.361 [2024-07-24 19:28:49.541882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.361 qpair failed and we were unable to recover it.
00:28:03.361 [2024-07-24 19:28:49.542061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.361 [2024-07-24 19:28:49.542075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.361 qpair failed and we were unable to recover it.
00:28:03.361 [2024-07-24 19:28:49.542344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.361 [2024-07-24 19:28:49.542384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.361 qpair failed and we were unable to recover it.
00:28:03.361 [2024-07-24 19:28:49.542736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.361 [2024-07-24 19:28:49.542779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.361 qpair failed and we were unable to recover it.
00:28:03.361 [2024-07-24 19:28:49.543077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.361 [2024-07-24 19:28:49.543119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.361 qpair failed and we were unable to recover it.
00:28:03.361 [2024-07-24 19:28:49.543497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.361 [2024-07-24 19:28:49.543538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.361 qpair failed and we were unable to recover it.
00:28:03.361 [2024-07-24 19:28:49.543921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.361 [2024-07-24 19:28:49.543964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.361 qpair failed and we were unable to recover it.
00:28:03.361 [2024-07-24 19:28:49.544176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.361 [2024-07-24 19:28:49.544217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.361 qpair failed and we were unable to recover it.
00:28:03.361 [2024-07-24 19:28:49.544653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.361 [2024-07-24 19:28:49.544700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.361 qpair failed and we were unable to recover it.
00:28:03.361 [2024-07-24 19:28:49.544975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.361 [2024-07-24 19:28:49.545016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.361 qpair failed and we were unable to recover it.
00:28:03.361 [2024-07-24 19:28:49.545296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.361 [2024-07-24 19:28:49.545346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.361 qpair failed and we were unable to recover it.
00:28:03.361 [2024-07-24 19:28:49.545580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.361 [2024-07-24 19:28:49.545595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.361 qpair failed and we were unable to recover it.
00:28:03.361 [2024-07-24 19:28:49.545924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.361 [2024-07-24 19:28:49.545966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.361 qpair failed and we were unable to recover it.
00:28:03.361 [2024-07-24 19:28:49.546288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.361 [2024-07-24 19:28:49.546329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.361 qpair failed and we were unable to recover it.
00:28:03.361 [2024-07-24 19:28:49.546653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.361 [2024-07-24 19:28:49.546667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.361 qpair failed and we were unable to recover it.
00:28:03.361 [2024-07-24 19:28:49.546916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.361 [2024-07-24 19:28:49.546930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.361 qpair failed and we were unable to recover it.
00:28:03.361 [2024-07-24 19:28:49.547218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.361 [2024-07-24 19:28:49.547255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.361 qpair failed and we were unable to recover it.
00:28:03.361 [2024-07-24 19:28:49.547660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.361 [2024-07-24 19:28:49.547702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.361 qpair failed and we were unable to recover it.
00:28:03.361 [2024-07-24 19:28:49.548099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.361 [2024-07-24 19:28:49.548141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.361 qpair failed and we were unable to recover it.
00:28:03.361 [2024-07-24 19:28:49.548531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.361 [2024-07-24 19:28:49.548571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.361 qpair failed and we were unable to recover it.
00:28:03.361 [2024-07-24 19:28:49.548937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.361 [2024-07-24 19:28:49.548979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.361 qpair failed and we were unable to recover it.
00:28:03.361 [2024-07-24 19:28:49.549337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.361 [2024-07-24 19:28:49.549352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.361 qpair failed and we were unable to recover it.
00:28:03.361 [2024-07-24 19:28:49.549593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.361 [2024-07-24 19:28:49.549607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.361 qpair failed and we were unable to recover it.
00:28:03.361 [2024-07-24 19:28:49.549910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.361 [2024-07-24 19:28:49.549926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.361 qpair failed and we were unable to recover it.
00:28:03.361 [2024-07-24 19:28:49.550105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.361 [2024-07-24 19:28:49.550121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.361 qpair failed and we were unable to recover it.
00:28:03.361 [2024-07-24 19:28:49.550314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.361 [2024-07-24 19:28:49.550355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.361 qpair failed and we were unable to recover it.
00:28:03.361 [2024-07-24 19:28:49.550660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.361 [2024-07-24 19:28:49.550701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.361 qpair failed and we were unable to recover it.
00:28:03.361 [2024-07-24 19:28:49.551000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.361 [2024-07-24 19:28:49.551042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.361 qpair failed and we were unable to recover it.
00:28:03.361 [2024-07-24 19:28:49.551374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.361 [2024-07-24 19:28:49.551415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.361 qpair failed and we were unable to recover it.
00:28:03.361 [2024-07-24 19:28:49.551645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.361 [2024-07-24 19:28:49.551686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.361 qpair failed and we were unable to recover it.
00:28:03.361 [2024-07-24 19:28:49.552000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.361 [2024-07-24 19:28:49.552041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.361 qpair failed and we were unable to recover it.
00:28:03.361 [2024-07-24 19:28:49.552291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.361 [2024-07-24 19:28:49.552332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.361 qpair failed and we were unable to recover it.
00:28:03.361 [2024-07-24 19:28:49.552703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.361 [2024-07-24 19:28:49.552753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.361 qpair failed and we were unable to recover it.
00:28:03.361 [2024-07-24 19:28:49.553074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.361 [2024-07-24 19:28:49.553115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.361 qpair failed and we were unable to recover it.
00:28:03.361 [2024-07-24 19:28:49.553519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.361 [2024-07-24 19:28:49.553560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.361 qpair failed and we were unable to recover it.
00:28:03.362 [2024-07-24 19:28:49.554087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.362 [2024-07-24 19:28:49.554174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420
00:28:03.362 qpair failed and we were unable to recover it.
00:28:03.362 [2024-07-24 19:28:49.554577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.362 [2024-07-24 19:28:49.554623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420
00:28:03.362 qpair failed and we were unable to recover it.
00:28:03.362 [2024-07-24 19:28:49.554987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.362 [2024-07-24 19:28:49.555033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420
00:28:03.362 qpair failed and we were unable to recover it.
00:28:03.362 [2024-07-24 19:28:49.555332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.362 [2024-07-24 19:28:49.555351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420
00:28:03.362 qpair failed and we were unable to recover it.
00:28:03.362 [2024-07-24 19:28:49.555608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.362 [2024-07-24 19:28:49.555655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420
00:28:03.362 qpair failed and we were unable to recover it.
00:28:03.362 [2024-07-24 19:28:49.555925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.362 [2024-07-24 19:28:49.555968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420
00:28:03.362 qpair failed and we were unable to recover it.
00:28:03.362 [2024-07-24 19:28:49.556259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.362 [2024-07-24 19:28:49.556301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420
00:28:03.362 qpair failed and we were unable to recover it.
00:28:03.362 [2024-07-24 19:28:49.556604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.362 [2024-07-24 19:28:49.556646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420
00:28:03.362 qpair failed and we were unable to recover it.
00:28:03.362 [2024-07-24 19:28:49.557037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.362 [2024-07-24 19:28:49.557078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420
00:28:03.362 qpair failed and we were unable to recover it.
00:28:03.362 [2024-07-24 19:28:49.557311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.362 [2024-07-24 19:28:49.557352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420
00:28:03.362 qpair failed and we were unable to recover it.
00:28:03.362 [2024-07-24 19:28:49.557632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.362 [2024-07-24 19:28:49.557674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420
00:28:03.362 qpair failed and we were unable to recover it.
00:28:03.362 [2024-07-24 19:28:49.557984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.362 [2024-07-24 19:28:49.558068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420
00:28:03.362 qpair failed and we were unable to recover it.
00:28:03.362 [2024-07-24 19:28:49.558341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.362 [2024-07-24 19:28:49.558358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.362 qpair failed and we were unable to recover it.
00:28:03.362 [2024-07-24 19:28:49.558637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.362 [2024-07-24 19:28:49.558668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.362 qpair failed and we were unable to recover it.
00:28:03.362 [2024-07-24 19:28:49.558905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.362 [2024-07-24 19:28:49.558948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.362 qpair failed and we were unable to recover it.
00:28:03.362 [2024-07-24 19:28:49.559351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.362 [2024-07-24 19:28:49.559392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.362 qpair failed and we were unable to recover it.
00:28:03.362 [2024-07-24 19:28:49.559679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.362 [2024-07-24 19:28:49.559731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.362 qpair failed and we were unable to recover it.
00:28:03.362 [2024-07-24 19:28:49.560113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.362 [2024-07-24 19:28:49.560127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.362 qpair failed and we were unable to recover it.
00:28:03.362 [2024-07-24 19:28:49.560307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.362 [2024-07-24 19:28:49.560346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.362 qpair failed and we were unable to recover it.
00:28:03.362 [2024-07-24 19:28:49.560662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.362 [2024-07-24 19:28:49.560703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.362 qpair failed and we were unable to recover it.
00:28:03.362 [2024-07-24 19:28:49.561022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.362 [2024-07-24 19:28:49.561063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.362 qpair failed and we were unable to recover it.
00:28:03.362 [2024-07-24 19:28:49.561374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.362 [2024-07-24 19:28:49.561415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.362 qpair failed and we were unable to recover it.
00:28:03.362 [2024-07-24 19:28:49.561756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.362 [2024-07-24 19:28:49.561799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.362 qpair failed and we were unable to recover it.
00:28:03.362 [2024-07-24 19:28:49.562055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.362 [2024-07-24 19:28:49.562096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.362 qpair failed and we were unable to recover it.
00:28:03.362 [2024-07-24 19:28:49.562457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.362 [2024-07-24 19:28:49.562498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.362 qpair failed and we were unable to recover it.
00:28:03.362 [2024-07-24 19:28:49.562826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.362 [2024-07-24 19:28:49.562869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.362 qpair failed and we were unable to recover it.
00:28:03.362 [2024-07-24 19:28:49.563245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.362 [2024-07-24 19:28:49.563286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.635 qpair failed and we were unable to recover it.
00:28:03.635 [2024-07-24 19:28:49.563589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.636 [2024-07-24 19:28:49.563617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.636 qpair failed and we were unable to recover it.
00:28:03.636 [2024-07-24 19:28:49.563867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.636 [2024-07-24 19:28:49.563908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.636 qpair failed and we were unable to recover it.
00:28:03.636 [2024-07-24 19:28:49.564259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.636 [2024-07-24 19:28:49.564301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.636 qpair failed and we were unable to recover it.
00:28:03.636 [2024-07-24 19:28:49.565587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.636 [2024-07-24 19:28:49.565616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.636 qpair failed and we were unable to recover it.
00:28:03.636 [2024-07-24 19:28:49.565915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.636 [2024-07-24 19:28:49.565932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.636 qpair failed and we were unable to recover it.
00:28:03.636 [2024-07-24 19:28:49.566129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.636 [2024-07-24 19:28:49.566171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.636 qpair failed and we were unable to recover it.
00:28:03.636 [2024-07-24 19:28:49.566508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.636 [2024-07-24 19:28:49.566549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.636 qpair failed and we were unable to recover it.
00:28:03.636 [2024-07-24 19:28:49.566918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.636 [2024-07-24 19:28:49.566960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.636 qpair failed and we were unable to recover it.
00:28:03.636 [2024-07-24 19:28:49.567336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.636 [2024-07-24 19:28:49.567377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.636 qpair failed and we were unable to recover it.
00:28:03.636 [2024-07-24 19:28:49.567754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.636 [2024-07-24 19:28:49.567795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.636 qpair failed and we were unable to recover it.
00:28:03.636 [2024-07-24 19:28:49.568171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.636 [2024-07-24 19:28:49.568212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.636 qpair failed and we were unable to recover it.
00:28:03.636 [2024-07-24 19:28:49.568447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.636 [2024-07-24 19:28:49.568488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.636 qpair failed and we were unable to recover it.
00:28:03.636 [2024-07-24 19:28:49.568870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.636 [2024-07-24 19:28:49.568912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.636 qpair failed and we were unable to recover it.
00:28:03.636 [2024-07-24 19:28:49.569223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.636 [2024-07-24 19:28:49.569272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.636 qpair failed and we were unable to recover it.
00:28:03.636 [2024-07-24 19:28:49.569647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.636 [2024-07-24 19:28:49.569688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.636 qpair failed and we were unable to recover it.
00:28:03.636 [2024-07-24 19:28:49.570036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.636 [2024-07-24 19:28:49.570077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.636 qpair failed and we were unable to recover it.
00:28:03.636 [2024-07-24 19:28:49.570494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.636 [2024-07-24 19:28:49.570535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.636 qpair failed and we were unable to recover it.
00:28:03.636 [2024-07-24 19:28:49.570930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.636 [2024-07-24 19:28:49.570973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.636 qpair failed and we were unable to recover it.
00:28:03.636 [2024-07-24 19:28:49.571320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.636 [2024-07-24 19:28:49.571361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.636 qpair failed and we were unable to recover it.
00:28:03.636 [2024-07-24 19:28:49.571755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.636 [2024-07-24 19:28:49.571798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.636 qpair failed and we were unable to recover it.
00:28:03.636 [2024-07-24 19:28:49.572092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.636 [2024-07-24 19:28:49.572133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.636 qpair failed and we were unable to recover it. 00:28:03.636 [2024-07-24 19:28:49.572440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.636 [2024-07-24 19:28:49.572481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.636 qpair failed and we were unable to recover it. 00:28:03.636 [2024-07-24 19:28:49.572856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.636 [2024-07-24 19:28:49.572898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.636 qpair failed and we were unable to recover it. 00:28:03.636 [2024-07-24 19:28:49.573225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.636 [2024-07-24 19:28:49.573265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.636 qpair failed and we were unable to recover it. 00:28:03.636 [2024-07-24 19:28:49.573643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.636 [2024-07-24 19:28:49.573685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.636 qpair failed and we were unable to recover it. 00:28:03.636 [2024-07-24 19:28:49.574000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.636 [2024-07-24 19:28:49.574042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.636 qpair failed and we were unable to recover it. 00:28:03.636 [2024-07-24 19:28:49.574399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.636 [2024-07-24 19:28:49.574439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.636 qpair failed and we were unable to recover it. 00:28:03.636 [2024-07-24 19:28:49.574757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.636 [2024-07-24 19:28:49.574799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.636 qpair failed and we were unable to recover it. 00:28:03.636 [2024-07-24 19:28:49.575121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.636 [2024-07-24 19:28:49.575161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.636 qpair failed and we were unable to recover it. 00:28:03.636 [2024-07-24 19:28:49.575539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.636 [2024-07-24 19:28:49.575580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.636 qpair failed and we were unable to recover it. 
00:28:03.636 [2024-07-24 19:28:49.575895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.636 [2024-07-24 19:28:49.575938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.636 qpair failed and we were unable to recover it. 00:28:03.636 [2024-07-24 19:28:49.576314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.636 [2024-07-24 19:28:49.576355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.636 qpair failed and we were unable to recover it. 00:28:03.636 [2024-07-24 19:28:49.576706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.636 [2024-07-24 19:28:49.576758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.636 qpair failed and we were unable to recover it. 00:28:03.636 [2024-07-24 19:28:49.577077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.636 [2024-07-24 19:28:49.577118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.636 qpair failed and we were unable to recover it. 00:28:03.636 [2024-07-24 19:28:49.577426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.636 [2024-07-24 19:28:49.577467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.636 qpair failed and we were unable to recover it. 00:28:03.637 [2024-07-24 19:28:49.577763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.637 [2024-07-24 19:28:49.577806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.637 qpair failed and we were unable to recover it. 00:28:03.637 [2024-07-24 19:28:49.578172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.637 [2024-07-24 19:28:49.578213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.637 qpair failed and we were unable to recover it. 00:28:03.637 [2024-07-24 19:28:49.578533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.637 [2024-07-24 19:28:49.578575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.637 qpair failed and we were unable to recover it. 00:28:03.637 [2024-07-24 19:28:49.578867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.637 [2024-07-24 19:28:49.578910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.637 qpair failed and we were unable to recover it. 00:28:03.637 [2024-07-24 19:28:49.579223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.637 [2024-07-24 19:28:49.579263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.637 qpair failed and we were unable to recover it. 
00:28:03.637 [2024-07-24 19:28:49.579611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.637 [2024-07-24 19:28:49.579653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.637 qpair failed and we were unable to recover it. 00:28:03.637 [2024-07-24 19:28:49.580038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.637 [2024-07-24 19:28:49.580080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.637 qpair failed and we were unable to recover it. 00:28:03.637 [2024-07-24 19:28:49.580439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.637 [2024-07-24 19:28:49.580452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.637 qpair failed and we were unable to recover it. 00:28:03.637 [2024-07-24 19:28:49.580695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.637 [2024-07-24 19:28:49.580709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.637 qpair failed and we were unable to recover it. 00:28:03.637 [2024-07-24 19:28:49.580987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.637 [2024-07-24 19:28:49.581031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.637 qpair failed and we were unable to recover it. 00:28:03.637 [2024-07-24 19:28:49.581400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.637 [2024-07-24 19:28:49.581442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.637 qpair failed and we were unable to recover it. 00:28:03.637 [2024-07-24 19:28:49.581839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.637 [2024-07-24 19:28:49.581882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.637 qpair failed and we were unable to recover it. 00:28:03.637 [2024-07-24 19:28:49.582141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.637 [2024-07-24 19:28:49.582182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.637 qpair failed and we were unable to recover it. 00:28:03.637 [2024-07-24 19:28:49.582552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.637 [2024-07-24 19:28:49.582592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.637 qpair failed and we were unable to recover it. 00:28:03.637 [2024-07-24 19:28:49.582971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.637 [2024-07-24 19:28:49.583013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.637 qpair failed and we were unable to recover it. 
00:28:03.637 [2024-07-24 19:28:49.583287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.637 [2024-07-24 19:28:49.583303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.637 qpair failed and we were unable to recover it. 00:28:03.637 [2024-07-24 19:28:49.583552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.637 [2024-07-24 19:28:49.583593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.637 qpair failed and we were unable to recover it. 00:28:03.637 [2024-07-24 19:28:49.583979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.637 [2024-07-24 19:28:49.584018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.637 qpair failed and we were unable to recover it. 00:28:03.637 [2024-07-24 19:28:49.584344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.637 [2024-07-24 19:28:49.584390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.637 qpair failed and we were unable to recover it. 00:28:03.637 [2024-07-24 19:28:49.584758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.637 [2024-07-24 19:28:49.584800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.637 qpair failed and we were unable to recover it. 00:28:03.637 [2024-07-24 19:28:49.585124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.637 [2024-07-24 19:28:49.585138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.637 qpair failed and we were unable to recover it. 00:28:03.637 [2024-07-24 19:28:49.585487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.637 [2024-07-24 19:28:49.585529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.637 qpair failed and we were unable to recover it. 00:28:03.637 [2024-07-24 19:28:49.585882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.637 [2024-07-24 19:28:49.585924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.637 qpair failed and we were unable to recover it. 00:28:03.637 [2024-07-24 19:28:49.586267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.637 [2024-07-24 19:28:49.586308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.637 qpair failed and we were unable to recover it. 00:28:03.637 [2024-07-24 19:28:49.586681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.637 [2024-07-24 19:28:49.586738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.637 qpair failed and we were unable to recover it. 
00:28:03.637 [2024-07-24 19:28:49.587070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.637 [2024-07-24 19:28:49.587111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.637 qpair failed and we were unable to recover it. 00:28:03.637 [2024-07-24 19:28:49.587488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.637 [2024-07-24 19:28:49.587529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.637 qpair failed and we were unable to recover it. 00:28:03.637 [2024-07-24 19:28:49.587856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.637 [2024-07-24 19:28:49.587899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.637 qpair failed and we were unable to recover it. 00:28:03.637 [2024-07-24 19:28:49.588205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.637 [2024-07-24 19:28:49.588246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.637 qpair failed and we were unable to recover it. 00:28:03.637 [2024-07-24 19:28:49.588565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.637 [2024-07-24 19:28:49.588580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.637 qpair failed and we were unable to recover it. 00:28:03.637 [2024-07-24 19:28:49.588908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.637 [2024-07-24 19:28:49.588924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.637 qpair failed and we were unable to recover it. 00:28:03.637 [2024-07-24 19:28:49.589191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.637 [2024-07-24 19:28:49.589232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.637 qpair failed and we were unable to recover it. 00:28:03.637 [2024-07-24 19:28:49.589541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.637 [2024-07-24 19:28:49.589582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.637 qpair failed and we were unable to recover it. 00:28:03.637 [2024-07-24 19:28:49.589901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.637 [2024-07-24 19:28:49.589943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.637 qpair failed and we were unable to recover it. 00:28:03.637 [2024-07-24 19:28:49.590171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.637 [2024-07-24 19:28:49.590211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.637 qpair failed and we were unable to recover it. 
00:28:03.638 [2024-07-24 19:28:49.590549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.638 [2024-07-24 19:28:49.590563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.638 qpair failed and we were unable to recover it. 00:28:03.638 [2024-07-24 19:28:49.590913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.638 [2024-07-24 19:28:49.590955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.638 qpair failed and we were unable to recover it. 00:28:03.638 [2024-07-24 19:28:49.591200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.638 [2024-07-24 19:28:49.591242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.638 qpair failed and we were unable to recover it. 00:28:03.638 [2024-07-24 19:28:49.591500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.638 [2024-07-24 19:28:49.591540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.638 qpair failed and we were unable to recover it. 00:28:03.638 [2024-07-24 19:28:49.591843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.638 [2024-07-24 19:28:49.591885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.638 qpair failed and we were unable to recover it. 00:28:03.638 [2024-07-24 19:28:49.592193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.638 [2024-07-24 19:28:49.592234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.638 qpair failed and we were unable to recover it. 00:28:03.638 [2024-07-24 19:28:49.592600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.638 [2024-07-24 19:28:49.592641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.638 qpair failed and we were unable to recover it. 00:28:03.638 [2024-07-24 19:28:49.592987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.638 [2024-07-24 19:28:49.593001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.638 qpair failed and we were unable to recover it. 00:28:03.638 [2024-07-24 19:28:49.593319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.638 [2024-07-24 19:28:49.593360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.638 qpair failed and we were unable to recover it. 00:28:03.638 [2024-07-24 19:28:49.593668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.638 [2024-07-24 19:28:49.593709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.638 qpair failed and we were unable to recover it. 
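The errno = 111 repeated above is Linux ECONNREFUSED: nothing is accepting TCP connections at 10.0.0.2:4420 when the host dials, so every qpair connect attempt is rejected immediately and logged as unrecoverable. A minimal standalone sketch that reproduces the same socket-layer failure (this is illustrative only, not SPDK's actual posix_sock_create; the address and port simply mirror the log, and any unused port shows the same errno):

/* Reproduce "connect() failed, errno = 111" against a port with no listener. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = {0};
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        /* With no listener on the target this prints errno = 111 on Linux. */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
    }
    close(fd);
    return 0;
}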
00:28:03.638 [2024-07-24 19:28:49.593973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.638 [2024-07-24 19:28:49.594015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.638 qpair failed and we were unable to recover it.
00:28:03.638 [2024-07-24 19:28:49.594989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.638 [2024-07-24 19:28:49.595074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420
00:28:03.638 qpair failed and we were unable to recover it.
00:28:03.638 [2024-07-24 19:28:49.595457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.638 [2024-07-24 19:28:49.595540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420
00:28:03.638 qpair failed and we were unable to recover it.
00:28:03.639 [2024-07-24 19:28:49.603093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.639 [2024-07-24 19:28:49.603145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420
00:28:03.639 qpair failed and we were unable to recover it.
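Note that only the tqpair value shifts across these entries (0x7fd54c000b90, then 0x7fd554000b90, then 0x1ce41a0, then back to 0x7fd554000b90) while addr and port stay 10.0.0.2:4420 throughout. The tqpair field is just the address of the qpair object being connected, so the changes most likely reflect the host tearing down and allocating fresh qpair objects between reconnect attempts, not different targets.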
00:28:03.639 [2024-07-24 19:28:49.604404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.639 [2024-07-24 19:28:49.604445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420
00:28:03.639 qpair failed and we were unable to recover it.
00:28:03.642 [2024-07-24 19:28:49.638333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.642 [2024-07-24 19:28:49.638352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420
00:28:03.642 qpair failed and we were unable to recover it.
00:28:03.642 [2024-07-24 19:28:49.638616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.642 [2024-07-24 19:28:49.638658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.642 qpair failed and we were unable to recover it. 00:28:03.642 [2024-07-24 19:28:49.638983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.642 [2024-07-24 19:28:49.639026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.642 qpair failed and we were unable to recover it. 00:28:03.642 [2024-07-24 19:28:49.639265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.642 [2024-07-24 19:28:49.639306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.642 qpair failed and we were unable to recover it. 00:28:03.642 [2024-07-24 19:28:49.639675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.642 [2024-07-24 19:28:49.639696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.642 qpair failed and we were unable to recover it. 00:28:03.642 [2024-07-24 19:28:49.639975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.642 [2024-07-24 19:28:49.639995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.642 qpair failed and we were unable to recover it. 00:28:03.642 [2024-07-24 19:28:49.640321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.642 [2024-07-24 19:28:49.640340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.642 qpair failed and we were unable to recover it. 00:28:03.642 [2024-07-24 19:28:49.640640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.642 [2024-07-24 19:28:49.640659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.642 qpair failed and we were unable to recover it. 00:28:03.642 [2024-07-24 19:28:49.640961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.642 [2024-07-24 19:28:49.640980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.642 qpair failed and we were unable to recover it. 00:28:03.642 [2024-07-24 19:28:49.641235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.642 [2024-07-24 19:28:49.641254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.642 qpair failed and we were unable to recover it. 00:28:03.642 [2024-07-24 19:28:49.641449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.642 [2024-07-24 19:28:49.641468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.642 qpair failed and we were unable to recover it. 
00:28:03.642 [2024-07-24 19:28:49.641713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.642 [2024-07-24 19:28:49.641737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.642 qpair failed and we were unable to recover it. 00:28:03.642 [2024-07-24 19:28:49.641922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.642 [2024-07-24 19:28:49.641941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.642 qpair failed and we were unable to recover it. 00:28:03.642 [2024-07-24 19:28:49.642208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.642 [2024-07-24 19:28:49.642226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.642 qpair failed and we were unable to recover it. 00:28:03.642 [2024-07-24 19:28:49.642493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.642 [2024-07-24 19:28:49.642512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.642 qpair failed and we were unable to recover it. 00:28:03.642 [2024-07-24 19:28:49.642757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.642 [2024-07-24 19:28:49.642776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.642 qpair failed and we were unable to recover it. 00:28:03.642 [2024-07-24 19:28:49.643052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.642 [2024-07-24 19:28:49.643071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.642 qpair failed and we were unable to recover it. 00:28:03.642 [2024-07-24 19:28:49.643345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.642 [2024-07-24 19:28:49.643365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.642 qpair failed and we were unable to recover it. 00:28:03.642 [2024-07-24 19:28:49.643632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.642 [2024-07-24 19:28:49.643651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.642 qpair failed and we were unable to recover it. 00:28:03.642 [2024-07-24 19:28:49.643862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.642 [2024-07-24 19:28:49.643881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.642 qpair failed and we were unable to recover it. 00:28:03.642 [2024-07-24 19:28:49.644134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.642 [2024-07-24 19:28:49.644152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.642 qpair failed and we were unable to recover it. 
00:28:03.642 [2024-07-24 19:28:49.644452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.642 [2024-07-24 19:28:49.644472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.642 qpair failed and we were unable to recover it. 00:28:03.642 [2024-07-24 19:28:49.644784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.642 [2024-07-24 19:28:49.644803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.642 qpair failed and we were unable to recover it. 00:28:03.642 [2024-07-24 19:28:49.645060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.642 [2024-07-24 19:28:49.645080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.642 qpair failed and we were unable to recover it. 00:28:03.642 [2024-07-24 19:28:49.645343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.642 [2024-07-24 19:28:49.645362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.642 qpair failed and we were unable to recover it. 00:28:03.642 [2024-07-24 19:28:49.645599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.642 [2024-07-24 19:28:49.645618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.642 qpair failed and we were unable to recover it. 00:28:03.642 [2024-07-24 19:28:49.645799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.642 [2024-07-24 19:28:49.645818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.642 qpair failed and we were unable to recover it. 00:28:03.642 [2024-07-24 19:28:49.646142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.642 [2024-07-24 19:28:49.646161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.642 qpair failed and we were unable to recover it. 00:28:03.642 [2024-07-24 19:28:49.646429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.643 [2024-07-24 19:28:49.646448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.643 qpair failed and we were unable to recover it. 00:28:03.643 [2024-07-24 19:28:49.646742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.643 [2024-07-24 19:28:49.646761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.643 qpair failed and we were unable to recover it. 00:28:03.643 [2024-07-24 19:28:49.647087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.643 [2024-07-24 19:28:49.647106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.643 qpair failed and we were unable to recover it. 
00:28:03.643 [2024-07-24 19:28:49.647375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.643 [2024-07-24 19:28:49.647396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.643 qpair failed and we were unable to recover it. 00:28:03.643 [2024-07-24 19:28:49.647744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.643 [2024-07-24 19:28:49.647764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.643 qpair failed and we were unable to recover it. 00:28:03.643 [2024-07-24 19:28:49.648015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.643 [2024-07-24 19:28:49.648034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.643 qpair failed and we were unable to recover it. 00:28:03.643 [2024-07-24 19:28:49.648291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.643 [2024-07-24 19:28:49.648311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.643 qpair failed and we were unable to recover it. 00:28:03.643 [2024-07-24 19:28:49.648546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.643 [2024-07-24 19:28:49.648564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.643 qpair failed and we were unable to recover it. 00:28:03.643 [2024-07-24 19:28:49.648910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.643 [2024-07-24 19:28:49.648929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.643 qpair failed and we were unable to recover it. 00:28:03.643 [2024-07-24 19:28:49.649260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.643 [2024-07-24 19:28:49.649278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.643 qpair failed and we were unable to recover it. 00:28:03.643 [2024-07-24 19:28:49.649536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.643 [2024-07-24 19:28:49.649555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.643 qpair failed and we were unable to recover it. 00:28:03.643 [2024-07-24 19:28:49.649721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.643 [2024-07-24 19:28:49.649739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.643 qpair failed and we were unable to recover it. 00:28:03.643 [2024-07-24 19:28:49.650011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.643 [2024-07-24 19:28:49.650030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.643 qpair failed and we were unable to recover it. 
00:28:03.643 [2024-07-24 19:28:49.650293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.643 [2024-07-24 19:28:49.650312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.643 qpair failed and we were unable to recover it. 00:28:03.643 [2024-07-24 19:28:49.650628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.643 [2024-07-24 19:28:49.650646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.643 qpair failed and we were unable to recover it. 00:28:03.643 [2024-07-24 19:28:49.650905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.643 [2024-07-24 19:28:49.650924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.643 qpair failed and we were unable to recover it. 00:28:03.643 [2024-07-24 19:28:49.651181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.643 [2024-07-24 19:28:49.651200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.643 qpair failed and we were unable to recover it. 00:28:03.643 [2024-07-24 19:28:49.651527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.643 [2024-07-24 19:28:49.651545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.643 qpair failed and we were unable to recover it. 00:28:03.643 [2024-07-24 19:28:49.651789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.643 [2024-07-24 19:28:49.651809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.643 qpair failed and we were unable to recover it. 00:28:03.643 [2024-07-24 19:28:49.652051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.643 [2024-07-24 19:28:49.652069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.643 qpair failed and we were unable to recover it. 00:28:03.643 [2024-07-24 19:28:49.652227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.643 [2024-07-24 19:28:49.652246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.643 qpair failed and we were unable to recover it. 00:28:03.643 [2024-07-24 19:28:49.652441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.643 [2024-07-24 19:28:49.652460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.643 qpair failed and we were unable to recover it. 00:28:03.643 [2024-07-24 19:28:49.652786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.643 [2024-07-24 19:28:49.652805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.643 qpair failed and we were unable to recover it. 
00:28:03.643 [2024-07-24 19:28:49.653002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.643 [2024-07-24 19:28:49.653021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.643 qpair failed and we were unable to recover it. 00:28:03.643 [2024-07-24 19:28:49.653260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.643 [2024-07-24 19:28:49.653279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.643 qpair failed and we were unable to recover it. 00:28:03.643 [2024-07-24 19:28:49.653523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.643 [2024-07-24 19:28:49.653542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.643 qpair failed and we were unable to recover it. 00:28:03.643 [2024-07-24 19:28:49.653865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.643 [2024-07-24 19:28:49.653884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.643 qpair failed and we were unable to recover it. 00:28:03.643 [2024-07-24 19:28:49.654190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.643 [2024-07-24 19:28:49.654209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.643 qpair failed and we were unable to recover it. 00:28:03.643 [2024-07-24 19:28:49.654443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.643 [2024-07-24 19:28:49.654462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.643 qpair failed and we were unable to recover it. 00:28:03.643 [2024-07-24 19:28:49.654689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.643 [2024-07-24 19:28:49.654707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.643 qpair failed and we were unable to recover it. 00:28:03.643 [2024-07-24 19:28:49.655058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.643 [2024-07-24 19:28:49.655103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.643 qpair failed and we were unable to recover it. 00:28:03.643 [2024-07-24 19:28:49.655453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.643 [2024-07-24 19:28:49.655495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.643 qpair failed and we were unable to recover it. 00:28:03.643 [2024-07-24 19:28:49.655740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.643 [2024-07-24 19:28:49.655763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.643 qpair failed and we were unable to recover it. 
00:28:03.643 [2024-07-24 19:28:49.655950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.643 [2024-07-24 19:28:49.655969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.643 qpair failed and we were unable to recover it. 00:28:03.643 [2024-07-24 19:28:49.656288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.643 [2024-07-24 19:28:49.656307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.643 qpair failed and we were unable to recover it. 00:28:03.643 [2024-07-24 19:28:49.656569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.644 [2024-07-24 19:28:49.656587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.644 qpair failed and we were unable to recover it. 00:28:03.644 [2024-07-24 19:28:49.656846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.644 [2024-07-24 19:28:49.656865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.644 qpair failed and we were unable to recover it. 00:28:03.644 [2024-07-24 19:28:49.657199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.644 [2024-07-24 19:28:49.657217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.644 qpair failed and we were unable to recover it. 00:28:03.644 [2024-07-24 19:28:49.657512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.644 [2024-07-24 19:28:49.657531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.644 qpair failed and we were unable to recover it. 00:28:03.644 [2024-07-24 19:28:49.657725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.644 [2024-07-24 19:28:49.657744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.644 qpair failed and we were unable to recover it. 00:28:03.644 [2024-07-24 19:28:49.658055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.644 [2024-07-24 19:28:49.658073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.644 qpair failed and we were unable to recover it. 00:28:03.644 [2024-07-24 19:28:49.658393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.644 [2024-07-24 19:28:49.658411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.644 qpair failed and we were unable to recover it. 00:28:03.644 [2024-07-24 19:28:49.658706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.644 [2024-07-24 19:28:49.658730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.644 qpair failed and we were unable to recover it. 
00:28:03.644 [2024-07-24 19:28:49.658971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.644 [2024-07-24 19:28:49.658994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.644 qpair failed and we were unable to recover it. 00:28:03.644 [2024-07-24 19:28:49.659248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.644 [2024-07-24 19:28:49.659266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.644 qpair failed and we were unable to recover it. 00:28:03.644 [2024-07-24 19:28:49.659594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.644 [2024-07-24 19:28:49.659612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.644 qpair failed and we were unable to recover it. 00:28:03.644 [2024-07-24 19:28:49.659855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.644 [2024-07-24 19:28:49.659874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.644 qpair failed and we were unable to recover it. 00:28:03.644 [2024-07-24 19:28:49.660112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.644 [2024-07-24 19:28:49.660130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.644 qpair failed and we were unable to recover it. 00:28:03.644 [2024-07-24 19:28:49.660471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.644 [2024-07-24 19:28:49.660489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.644 qpair failed and we were unable to recover it. 00:28:03.644 [2024-07-24 19:28:49.660753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.644 [2024-07-24 19:28:49.660772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.644 qpair failed and we were unable to recover it. 00:28:03.644 [2024-07-24 19:28:49.661027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.644 [2024-07-24 19:28:49.661045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.644 qpair failed and we were unable to recover it. 00:28:03.644 [2024-07-24 19:28:49.661282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.644 [2024-07-24 19:28:49.661300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.644 qpair failed and we were unable to recover it. 00:28:03.644 [2024-07-24 19:28:49.661567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.644 [2024-07-24 19:28:49.661585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.644 qpair failed and we were unable to recover it. 
00:28:03.644 [2024-07-24 19:28:49.661929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.644 [2024-07-24 19:28:49.661948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.644 qpair failed and we were unable to recover it. 00:28:03.644 [2024-07-24 19:28:49.662272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.644 [2024-07-24 19:28:49.662290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.644 qpair failed and we were unable to recover it. 00:28:03.644 [2024-07-24 19:28:49.662525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.644 [2024-07-24 19:28:49.662544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.644 qpair failed and we were unable to recover it. 00:28:03.644 [2024-07-24 19:28:49.662846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.644 [2024-07-24 19:28:49.662865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.644 qpair failed and we were unable to recover it. 00:28:03.644 [2024-07-24 19:28:49.663111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.644 [2024-07-24 19:28:49.663130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.644 qpair failed and we were unable to recover it. 00:28:03.644 [2024-07-24 19:28:49.663475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.644 [2024-07-24 19:28:49.663494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.644 qpair failed and we were unable to recover it. 00:28:03.644 [2024-07-24 19:28:49.663817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.644 [2024-07-24 19:28:49.663836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.644 qpair failed and we were unable to recover it. 00:28:03.644 [2024-07-24 19:28:49.664031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.644 [2024-07-24 19:28:49.664050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.644 qpair failed and we were unable to recover it. 00:28:03.644 [2024-07-24 19:28:49.664365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.644 [2024-07-24 19:28:49.664383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.644 qpair failed and we were unable to recover it. 00:28:03.644 [2024-07-24 19:28:49.664635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.644 [2024-07-24 19:28:49.664654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.644 qpair failed and we were unable to recover it. 
00:28:03.644 [2024-07-24 19:28:49.664895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.644 [2024-07-24 19:28:49.664914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.644 qpair failed and we were unable to recover it. 00:28:03.644 [2024-07-24 19:28:49.665140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.644 [2024-07-24 19:28:49.665159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.644 qpair failed and we were unable to recover it. 00:28:03.644 [2024-07-24 19:28:49.665423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.644 [2024-07-24 19:28:49.665442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.644 qpair failed and we were unable to recover it. 00:28:03.644 [2024-07-24 19:28:49.665769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.644 [2024-07-24 19:28:49.665787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.644 qpair failed and we were unable to recover it. 00:28:03.644 [2024-07-24 19:28:49.666115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.644 [2024-07-24 19:28:49.666134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.644 qpair failed and we were unable to recover it. 00:28:03.644 [2024-07-24 19:28:49.666361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.644 [2024-07-24 19:28:49.666379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.644 qpair failed and we were unable to recover it. 00:28:03.644 [2024-07-24 19:28:49.666607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.644 [2024-07-24 19:28:49.666625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:03.644 qpair failed and we were unable to recover it. 00:28:03.644 [2024-07-24 19:28:49.666973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.645 [2024-07-24 19:28:49.666999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.645 qpair failed and we were unable to recover it. 00:28:03.645 [2024-07-24 19:28:49.667197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.645 [2024-07-24 19:28:49.667216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.645 qpair failed and we were unable to recover it. 00:28:03.645 [2024-07-24 19:28:49.667556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.645 [2024-07-24 19:28:49.667574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.645 qpair failed and we were unable to recover it. 
00:28:03.645 [2024-07-24 19:28:49.667892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.645 [2024-07-24 19:28:49.667911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.645 qpair failed and we were unable to recover it. 00:28:03.645 [2024-07-24 19:28:49.668156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.645 [2024-07-24 19:28:49.668175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.645 qpair failed and we were unable to recover it. 00:28:03.645 [2024-07-24 19:28:49.668494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.645 [2024-07-24 19:28:49.668512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.645 qpair failed and we were unable to recover it. 00:28:03.645 [2024-07-24 19:28:49.668695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.645 [2024-07-24 19:28:49.668719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.645 qpair failed and we were unable to recover it. 00:28:03.645 [2024-07-24 19:28:49.669048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.645 [2024-07-24 19:28:49.669067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.645 qpair failed and we were unable to recover it. 00:28:03.645 [2024-07-24 19:28:49.669363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.645 [2024-07-24 19:28:49.669381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.645 qpair failed and we were unable to recover it. 00:28:03.645 [2024-07-24 19:28:49.669552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.645 [2024-07-24 19:28:49.669570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.645 qpair failed and we were unable to recover it. 00:28:03.645 [2024-07-24 19:28:49.669806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.645 [2024-07-24 19:28:49.669825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.645 qpair failed and we were unable to recover it. 00:28:03.645 [2024-07-24 19:28:49.670148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.645 [2024-07-24 19:28:49.670166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.645 qpair failed and we were unable to recover it. 00:28:03.645 [2024-07-24 19:28:49.670418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.645 [2024-07-24 19:28:49.670437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.645 qpair failed and we were unable to recover it. 
00:28:03.645 [2024-07-24 19:28:49.670695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.645 [2024-07-24 19:28:49.670712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.645 qpair failed and we were unable to recover it. 00:28:03.645 [2024-07-24 19:28:49.670952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.645 [2024-07-24 19:28:49.670971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.645 qpair failed and we were unable to recover it. 00:28:03.645 [2024-07-24 19:28:49.671292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.645 [2024-07-24 19:28:49.671311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.645 qpair failed and we were unable to recover it. 00:28:03.645 [2024-07-24 19:28:49.671653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.645 [2024-07-24 19:28:49.671671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.645 qpair failed and we were unable to recover it. 00:28:03.645 [2024-07-24 19:28:49.671966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.645 [2024-07-24 19:28:49.671985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.645 qpair failed and we were unable to recover it. 00:28:03.645 [2024-07-24 19:28:49.672321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.645 [2024-07-24 19:28:49.672340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.645 qpair failed and we were unable to recover it. 00:28:03.645 [2024-07-24 19:28:49.672511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.645 [2024-07-24 19:28:49.672530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.645 qpair failed and we were unable to recover it. 00:28:03.645 [2024-07-24 19:28:49.672758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.645 [2024-07-24 19:28:49.672777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.645 qpair failed and we were unable to recover it. 00:28:03.645 [2024-07-24 19:28:49.673020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.645 [2024-07-24 19:28:49.673039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.645 qpair failed and we were unable to recover it. 00:28:03.645 [2024-07-24 19:28:49.673229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.645 [2024-07-24 19:28:49.673247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.645 qpair failed and we were unable to recover it. 
00:28:03.645 [2024-07-24 19:28:49.673600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.645 [2024-07-24 19:28:49.673618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.645 qpair failed and we were unable to recover it. 00:28:03.645 [2024-07-24 19:28:49.673844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.645 [2024-07-24 19:28:49.673863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.645 qpair failed and we were unable to recover it. 00:28:03.645 [2024-07-24 19:28:49.674108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.645 [2024-07-24 19:28:49.674127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.645 qpair failed and we were unable to recover it. 00:28:03.645 [2024-07-24 19:28:49.674448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.645 [2024-07-24 19:28:49.674466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.645 qpair failed and we were unable to recover it. 00:28:03.645 [2024-07-24 19:28:49.674723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.645 [2024-07-24 19:28:49.674744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.645 qpair failed and we were unable to recover it. 00:28:03.645 [2024-07-24 19:28:49.675048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.645 [2024-07-24 19:28:49.675067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.645 qpair failed and we were unable to recover it. 00:28:03.645 [2024-07-24 19:28:49.675329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.645 [2024-07-24 19:28:49.675347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.645 qpair failed and we were unable to recover it. 00:28:03.645 [2024-07-24 19:28:49.675665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.645 [2024-07-24 19:28:49.675683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.645 qpair failed and we were unable to recover it. 00:28:03.645 [2024-07-24 19:28:49.676015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.646 [2024-07-24 19:28:49.676033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.646 qpair failed and we were unable to recover it. 00:28:03.646 [2024-07-24 19:28:49.676256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.646 [2024-07-24 19:28:49.676274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.646 qpair failed and we were unable to recover it. 
00:28:03.646 [2024-07-24 19:28:49.676508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.646 [2024-07-24 19:28:49.676526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420
00:28:03.646 qpair failed and we were unable to recover it.
00:28:03.646 [condensed: the same three-line pattern — posix.c:1023:posix_sock_create connect() failed with errno = 111, then nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error against addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats continuously from 19:28:49.676 through 19:28:49.694 for tqpair=0x1ce41a0, and from 19:28:49.694 through 19:28:49.744 for tqpair=0x7fd554000b90]
00:28:03.652 [2024-07-24 19:28:49.744350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.652 [2024-07-24 19:28:49.744392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420
00:28:03.652 qpair failed and we were unable to recover it.
00:28:03.652 [2024-07-24 19:28:49.744692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.652 [2024-07-24 19:28:49.744742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.652 qpair failed and we were unable to recover it. 00:28:03.652 [2024-07-24 19:28:49.745052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.652 [2024-07-24 19:28:49.745070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.652 qpair failed and we were unable to recover it. 00:28:03.652 [2024-07-24 19:28:49.745242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.652 [2024-07-24 19:28:49.745261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.652 qpair failed and we were unable to recover it. 00:28:03.652 [2024-07-24 19:28:49.745511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.652 [2024-07-24 19:28:49.745551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.652 qpair failed and we were unable to recover it. 00:28:03.652 [2024-07-24 19:28:49.745861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.652 [2024-07-24 19:28:49.745902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.652 qpair failed and we were unable to recover it. 00:28:03.652 [2024-07-24 19:28:49.746185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.652 [2024-07-24 19:28:49.746227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.652 qpair failed and we were unable to recover it. 00:28:03.652 [2024-07-24 19:28:49.746532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.652 [2024-07-24 19:28:49.746573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.652 qpair failed and we were unable to recover it. 00:28:03.652 [2024-07-24 19:28:49.746881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.652 [2024-07-24 19:28:49.746923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.652 qpair failed and we were unable to recover it. 00:28:03.652 [2024-07-24 19:28:49.747272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.652 [2024-07-24 19:28:49.747313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.652 qpair failed and we were unable to recover it. 00:28:03.652 [2024-07-24 19:28:49.747633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.652 [2024-07-24 19:28:49.747652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.652 qpair failed and we were unable to recover it. 
00:28:03.652 [2024-07-24 19:28:49.747905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.652 [2024-07-24 19:28:49.747944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.652 qpair failed and we were unable to recover it. 00:28:03.652 [2024-07-24 19:28:49.748329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.652 [2024-07-24 19:28:49.748370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.652 qpair failed and we were unable to recover it. 00:28:03.652 [2024-07-24 19:28:49.748749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.652 [2024-07-24 19:28:49.748768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.652 qpair failed and we were unable to recover it. 00:28:03.652 [2024-07-24 19:28:49.749087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.652 [2024-07-24 19:28:49.749106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.652 qpair failed and we were unable to recover it. 00:28:03.652 [2024-07-24 19:28:49.749359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.652 [2024-07-24 19:28:49.749400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.652 qpair failed and we were unable to recover it. 00:28:03.652 [2024-07-24 19:28:49.749795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.652 [2024-07-24 19:28:49.749814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.652 qpair failed and we were unable to recover it. 00:28:03.652 [2024-07-24 19:28:49.750064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.652 [2024-07-24 19:28:49.750083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.652 qpair failed and we were unable to recover it. 00:28:03.652 [2024-07-24 19:28:49.750387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.652 [2024-07-24 19:28:49.750427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.652 qpair failed and we were unable to recover it. 00:28:03.652 [2024-07-24 19:28:49.750788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.652 [2024-07-24 19:28:49.750829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.652 qpair failed and we were unable to recover it. 00:28:03.652 [2024-07-24 19:28:49.751191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.652 [2024-07-24 19:28:49.751209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.652 qpair failed and we were unable to recover it. 
00:28:03.652 [2024-07-24 19:28:49.751517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.652 [2024-07-24 19:28:49.751559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.652 qpair failed and we were unable to recover it. 00:28:03.652 [2024-07-24 19:28:49.751860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.652 [2024-07-24 19:28:49.751902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.652 qpair failed and we were unable to recover it. 00:28:03.652 [2024-07-24 19:28:49.752215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.652 [2024-07-24 19:28:49.752256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.652 qpair failed and we were unable to recover it. 00:28:03.652 [2024-07-24 19:28:49.752549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.652 [2024-07-24 19:28:49.752590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.652 qpair failed and we were unable to recover it. 00:28:03.652 [2024-07-24 19:28:49.752912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.652 [2024-07-24 19:28:49.752931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.652 qpair failed and we were unable to recover it. 00:28:03.652 [2024-07-24 19:28:49.753172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.652 [2024-07-24 19:28:49.753193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.652 qpair failed and we were unable to recover it. 00:28:03.652 [2024-07-24 19:28:49.753491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.652 [2024-07-24 19:28:49.753510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.652 qpair failed and we were unable to recover it. 00:28:03.652 [2024-07-24 19:28:49.753875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.652 [2024-07-24 19:28:49.753916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.652 qpair failed and we were unable to recover it. 00:28:03.652 [2024-07-24 19:28:49.754213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.652 [2024-07-24 19:28:49.754254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.652 qpair failed and we were unable to recover it. 00:28:03.653 [2024-07-24 19:28:49.754590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.653 [2024-07-24 19:28:49.754631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.653 qpair failed and we were unable to recover it. 
00:28:03.653 [2024-07-24 19:28:49.755021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.653 [2024-07-24 19:28:49.755064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.653 qpair failed and we were unable to recover it. 00:28:03.653 [2024-07-24 19:28:49.755368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.653 [2024-07-24 19:28:49.755410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.653 qpair failed and we were unable to recover it. 00:28:03.653 [2024-07-24 19:28:49.755772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.653 [2024-07-24 19:28:49.755791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.653 qpair failed and we were unable to recover it. 00:28:03.653 [2024-07-24 19:28:49.756116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.653 [2024-07-24 19:28:49.756157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.653 qpair failed and we were unable to recover it. 00:28:03.653 [2024-07-24 19:28:49.756545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.653 [2024-07-24 19:28:49.756587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.653 qpair failed and we were unable to recover it. 00:28:03.653 [2024-07-24 19:28:49.756899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.653 [2024-07-24 19:28:49.756919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.653 qpair failed and we were unable to recover it. 00:28:03.653 [2024-07-24 19:28:49.757274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.653 [2024-07-24 19:28:49.757315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.653 qpair failed and we were unable to recover it. 00:28:03.653 [2024-07-24 19:28:49.757558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.653 [2024-07-24 19:28:49.757600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.653 qpair failed and we were unable to recover it. 00:28:03.653 [2024-07-24 19:28:49.757912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.653 [2024-07-24 19:28:49.757930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.653 qpair failed and we were unable to recover it. 00:28:03.653 [2024-07-24 19:28:49.758171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.653 [2024-07-24 19:28:49.758189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.653 qpair failed and we were unable to recover it. 
00:28:03.653 [2024-07-24 19:28:49.758423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.653 [2024-07-24 19:28:49.758441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.653 qpair failed and we were unable to recover it. 00:28:03.653 [2024-07-24 19:28:49.758683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.653 [2024-07-24 19:28:49.758701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.653 qpair failed and we were unable to recover it. 00:28:03.653 [2024-07-24 19:28:49.758995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.653 [2024-07-24 19:28:49.759037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.653 qpair failed and we were unable to recover it. 00:28:03.653 [2024-07-24 19:28:49.759408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.653 [2024-07-24 19:28:49.759449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.653 qpair failed and we were unable to recover it. 00:28:03.653 [2024-07-24 19:28:49.759816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.653 [2024-07-24 19:28:49.759836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.653 qpair failed and we were unable to recover it. 00:28:03.653 [2024-07-24 19:28:49.760112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.653 [2024-07-24 19:28:49.760131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.653 qpair failed and we were unable to recover it. 00:28:03.653 [2024-07-24 19:28:49.760482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.653 [2024-07-24 19:28:49.760523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.653 qpair failed and we were unable to recover it. 00:28:03.653 [2024-07-24 19:28:49.760828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.653 [2024-07-24 19:28:49.760870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.653 qpair failed and we were unable to recover it. 00:28:03.653 [2024-07-24 19:28:49.761137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.653 [2024-07-24 19:28:49.761156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.653 qpair failed and we were unable to recover it. 00:28:03.653 [2024-07-24 19:28:49.761426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.653 [2024-07-24 19:28:49.761467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.653 qpair failed and we were unable to recover it. 
00:28:03.653 [2024-07-24 19:28:49.761782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.653 [2024-07-24 19:28:49.761824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.653 qpair failed and we were unable to recover it. 00:28:03.653 [2024-07-24 19:28:49.762210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.653 [2024-07-24 19:28:49.762252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.653 qpair failed and we were unable to recover it. 00:28:03.653 [2024-07-24 19:28:49.762604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.653 [2024-07-24 19:28:49.762646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.653 qpair failed and we were unable to recover it. 00:28:03.653 [2024-07-24 19:28:49.763029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.653 [2024-07-24 19:28:49.763072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.653 qpair failed and we were unable to recover it. 00:28:03.653 [2024-07-24 19:28:49.763363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.653 [2024-07-24 19:28:49.763403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.653 qpair failed and we were unable to recover it. 00:28:03.653 [2024-07-24 19:28:49.763685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.653 [2024-07-24 19:28:49.763704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.653 qpair failed and we were unable to recover it. 00:28:03.653 [2024-07-24 19:28:49.763960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.653 [2024-07-24 19:28:49.764001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.653 qpair failed and we were unable to recover it. 00:28:03.653 [2024-07-24 19:28:49.764392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.653 [2024-07-24 19:28:49.764434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.653 qpair failed and we were unable to recover it. 00:28:03.653 [2024-07-24 19:28:49.764821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.653 [2024-07-24 19:28:49.764863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.653 qpair failed and we were unable to recover it. 00:28:03.653 [2024-07-24 19:28:49.765165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.653 [2024-07-24 19:28:49.765207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.653 qpair failed and we were unable to recover it. 
00:28:03.653 [2024-07-24 19:28:49.765487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.653 [2024-07-24 19:28:49.765527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.653 qpair failed and we were unable to recover it. 00:28:03.653 [2024-07-24 19:28:49.765841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.653 [2024-07-24 19:28:49.765884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.653 qpair failed and we were unable to recover it. 00:28:03.653 [2024-07-24 19:28:49.766202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.653 [2024-07-24 19:28:49.766243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.653 qpair failed and we were unable to recover it. 00:28:03.653 [2024-07-24 19:28:49.766652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.653 [2024-07-24 19:28:49.766693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.653 qpair failed and we were unable to recover it. 00:28:03.653 [2024-07-24 19:28:49.767074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.654 [2024-07-24 19:28:49.767118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.654 qpair failed and we were unable to recover it. 00:28:03.654 [2024-07-24 19:28:49.767416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.654 [2024-07-24 19:28:49.767463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.654 qpair failed and we were unable to recover it. 00:28:03.654 [2024-07-24 19:28:49.767837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.654 [2024-07-24 19:28:49.767879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.654 qpair failed and we were unable to recover it. 00:28:03.654 [2024-07-24 19:28:49.768252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.654 [2024-07-24 19:28:49.768293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.654 qpair failed and we were unable to recover it. 00:28:03.654 [2024-07-24 19:28:49.768593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.654 [2024-07-24 19:28:49.768634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.654 qpair failed and we were unable to recover it. 00:28:03.654 [2024-07-24 19:28:49.769002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.654 [2024-07-24 19:28:49.769044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.654 qpair failed and we were unable to recover it. 
00:28:03.654 [2024-07-24 19:28:49.769349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.654 [2024-07-24 19:28:49.769390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.654 qpair failed and we were unable to recover it. 00:28:03.654 [2024-07-24 19:28:49.769670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.654 [2024-07-24 19:28:49.769712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.654 qpair failed and we were unable to recover it. 00:28:03.654 [2024-07-24 19:28:49.770116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.654 [2024-07-24 19:28:49.770157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.654 qpair failed and we were unable to recover it. 00:28:03.654 [2024-07-24 19:28:49.770474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.654 [2024-07-24 19:28:49.770515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.654 qpair failed and we were unable to recover it. 00:28:03.654 [2024-07-24 19:28:49.770867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.654 [2024-07-24 19:28:49.770910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.654 qpair failed and we were unable to recover it. 00:28:03.654 [2024-07-24 19:28:49.771292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.654 [2024-07-24 19:28:49.771333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.654 qpair failed and we were unable to recover it. 00:28:03.654 [2024-07-24 19:28:49.771616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.654 [2024-07-24 19:28:49.771635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.654 qpair failed and we were unable to recover it. 00:28:03.654 [2024-07-24 19:28:49.771950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.654 [2024-07-24 19:28:49.771968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.654 qpair failed and we were unable to recover it. 00:28:03.654 [2024-07-24 19:28:49.772310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.654 [2024-07-24 19:28:49.772329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.654 qpair failed and we were unable to recover it. 00:28:03.654 [2024-07-24 19:28:49.772561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.654 [2024-07-24 19:28:49.772580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.654 qpair failed and we were unable to recover it. 
00:28:03.654 [2024-07-24 19:28:49.772794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.654 [2024-07-24 19:28:49.772836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.654 qpair failed and we were unable to recover it. 00:28:03.654 [2024-07-24 19:28:49.773238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.654 [2024-07-24 19:28:49.773280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.654 qpair failed and we were unable to recover it. 00:28:03.654 [2024-07-24 19:28:49.773671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.654 [2024-07-24 19:28:49.773726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.654 qpair failed and we were unable to recover it. 00:28:03.654 [2024-07-24 19:28:49.774019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.654 [2024-07-24 19:28:49.774059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.654 qpair failed and we were unable to recover it. 00:28:03.654 [2024-07-24 19:28:49.774438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.654 [2024-07-24 19:28:49.774479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.654 qpair failed and we were unable to recover it. 00:28:03.654 [2024-07-24 19:28:49.774873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.654 [2024-07-24 19:28:49.774915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.654 qpair failed and we were unable to recover it. 00:28:03.654 [2024-07-24 19:28:49.775212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.654 [2024-07-24 19:28:49.775253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.654 qpair failed and we were unable to recover it. 00:28:03.654 [2024-07-24 19:28:49.775659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.654 [2024-07-24 19:28:49.775700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.654 qpair failed and we were unable to recover it. 00:28:03.654 [2024-07-24 19:28:49.776053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.654 [2024-07-24 19:28:49.776072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.654 qpair failed and we were unable to recover it. 00:28:03.654 [2024-07-24 19:28:49.776364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.654 [2024-07-24 19:28:49.776405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.654 qpair failed and we were unable to recover it. 
00:28:03.654 [2024-07-24 19:28:49.776671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.654 [2024-07-24 19:28:49.776711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.654 qpair failed and we were unable to recover it. 00:28:03.654 [2024-07-24 19:28:49.777113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.654 [2024-07-24 19:28:49.777155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.654 qpair failed and we were unable to recover it. 00:28:03.654 [2024-07-24 19:28:49.777524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.654 [2024-07-24 19:28:49.777570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.654 qpair failed and we were unable to recover it. 00:28:03.654 [2024-07-24 19:28:49.777866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.654 [2024-07-24 19:28:49.777907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.654 qpair failed and we were unable to recover it. 00:28:03.654 [2024-07-24 19:28:49.778133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.654 [2024-07-24 19:28:49.778174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.654 qpair failed and we were unable to recover it. 00:28:03.654 [2024-07-24 19:28:49.778414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.654 [2024-07-24 19:28:49.778457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.654 qpair failed and we were unable to recover it. 00:28:03.654 [2024-07-24 19:28:49.778749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.654 [2024-07-24 19:28:49.778791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.654 qpair failed and we were unable to recover it. 00:28:03.654 [2024-07-24 19:28:49.779168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.654 [2024-07-24 19:28:49.779210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.654 qpair failed and we were unable to recover it. 00:28:03.654 [2024-07-24 19:28:49.779483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.654 [2024-07-24 19:28:49.779525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.654 qpair failed and we were unable to recover it. 00:28:03.654 [2024-07-24 19:28:49.779878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.655 [2024-07-24 19:28:49.779919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.655 qpair failed and we were unable to recover it. 
00:28:03.655 [2024-07-24 19:28:49.780292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.655 [2024-07-24 19:28:49.780333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.655 qpair failed and we were unable to recover it. 00:28:03.655 [2024-07-24 19:28:49.780689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.655 [2024-07-24 19:28:49.780736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.655 qpair failed and we were unable to recover it. 00:28:03.655 [2024-07-24 19:28:49.781019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.655 [2024-07-24 19:28:49.781060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.655 qpair failed and we were unable to recover it. 00:28:03.655 [2024-07-24 19:28:49.781441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.655 [2024-07-24 19:28:49.781482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.655 qpair failed and we were unable to recover it. 00:28:03.655 [2024-07-24 19:28:49.781854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.655 [2024-07-24 19:28:49.781895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.655 qpair failed and we were unable to recover it. 00:28:03.655 [2024-07-24 19:28:49.782187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.655 [2024-07-24 19:28:49.782228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.655 qpair failed and we were unable to recover it. 00:28:03.655 [2024-07-24 19:28:49.782628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.655 [2024-07-24 19:28:49.782670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.655 qpair failed and we were unable to recover it. 00:28:03.655 [2024-07-24 19:28:49.782954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.655 [2024-07-24 19:28:49.782973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.655 qpair failed and we were unable to recover it. 00:28:03.655 [2024-07-24 19:28:49.783239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.655 [2024-07-24 19:28:49.783258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.655 qpair failed and we were unable to recover it. 00:28:03.655 [2024-07-24 19:28:49.783629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.655 [2024-07-24 19:28:49.783671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.655 qpair failed and we were unable to recover it. 
00:28:03.655 [2024-07-24 19:28:49.785154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.655 [2024-07-24 19:28:49.785187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.655 qpair failed and we were unable to recover it. 00:28:03.655 [2024-07-24 19:28:49.785564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.655 [2024-07-24 19:28:49.785586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.655 qpair failed and we were unable to recover it. 00:28:03.655 [2024-07-24 19:28:49.785919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.655 [2024-07-24 19:28:49.785963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.655 qpair failed and we were unable to recover it. 00:28:03.655 [2024-07-24 19:28:49.786265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.655 [2024-07-24 19:28:49.786308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.655 qpair failed and we were unable to recover it. 00:28:03.655 [2024-07-24 19:28:49.786660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.655 [2024-07-24 19:28:49.786701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.655 qpair failed and we were unable to recover it. 00:28:03.655 [2024-07-24 19:28:49.787064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.655 [2024-07-24 19:28:49.787106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.655 qpair failed and we were unable to recover it. 00:28:03.655 [2024-07-24 19:28:49.787465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.655 [2024-07-24 19:28:49.787507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.655 qpair failed and we were unable to recover it. 00:28:03.655 [2024-07-24 19:28:49.787854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.655 [2024-07-24 19:28:49.787874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.655 qpair failed and we were unable to recover it. 00:28:03.655 [2024-07-24 19:28:49.788202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.655 [2024-07-24 19:28:49.788243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.655 qpair failed and we were unable to recover it. 00:28:03.655 [2024-07-24 19:28:49.788571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.655 [2024-07-24 19:28:49.788614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.655 qpair failed and we were unable to recover it. 
00:28:03.655 [2024-07-24 19:28:49.788871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.655 [2024-07-24 19:28:49.788914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.655 qpair failed and we were unable to recover it. 00:28:03.655 [2024-07-24 19:28:49.789240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.655 [2024-07-24 19:28:49.789283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.655 qpair failed and we were unable to recover it. 00:28:03.655 [2024-07-24 19:28:49.789637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.655 [2024-07-24 19:28:49.789679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.655 qpair failed and we were unable to recover it. 00:28:03.655 [2024-07-24 19:28:49.789996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.655 [2024-07-24 19:28:49.790039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.655 qpair failed and we were unable to recover it. 00:28:03.655 [2024-07-24 19:28:49.790335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.655 [2024-07-24 19:28:49.790377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.655 qpair failed and we were unable to recover it. 00:28:03.655 [2024-07-24 19:28:49.790672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.655 [2024-07-24 19:28:49.790713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.655 qpair failed and we were unable to recover it. 00:28:03.655 [2024-07-24 19:28:49.791070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.655 [2024-07-24 19:28:49.791112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.655 qpair failed and we were unable to recover it. 00:28:03.655 [2024-07-24 19:28:49.791508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.655 [2024-07-24 19:28:49.791549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.655 qpair failed and we were unable to recover it. 00:28:03.655 [2024-07-24 19:28:49.791896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.655 [2024-07-24 19:28:49.791939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.655 qpair failed and we were unable to recover it. 00:28:03.655 [2024-07-24 19:28:49.792306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.656 [2024-07-24 19:28:49.792346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.656 qpair failed and we were unable to recover it. 
00:28:03.656 [2024-07-24 19:28:49.792723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.656 [2024-07-24 19:28:49.792766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.656 qpair failed and we were unable to recover it. 00:28:03.656 [2024-07-24 19:28:49.793140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.656 [2024-07-24 19:28:49.793181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.656 qpair failed and we were unable to recover it. 00:28:03.656 [2024-07-24 19:28:49.793599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.656 [2024-07-24 19:28:49.793646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.656 qpair failed and we were unable to recover it. 00:28:03.656 [2024-07-24 19:28:49.794016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.656 [2024-07-24 19:28:49.794036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.656 qpair failed and we were unable to recover it. 00:28:03.656 [2024-07-24 19:28:49.794318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.656 [2024-07-24 19:28:49.794359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.656 qpair failed and we were unable to recover it. 00:28:03.656 [2024-07-24 19:28:49.794737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.656 [2024-07-24 19:28:49.794779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.656 qpair failed and we were unable to recover it. 00:28:03.656 [2024-07-24 19:28:49.795137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.656 [2024-07-24 19:28:49.795179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.656 qpair failed and we were unable to recover it. 00:28:03.656 [2024-07-24 19:28:49.795543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.656 [2024-07-24 19:28:49.795584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.656 qpair failed and we were unable to recover it. 00:28:03.656 [2024-07-24 19:28:49.795821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.656 [2024-07-24 19:28:49.795863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.656 qpair failed and we were unable to recover it. 00:28:03.656 [2024-07-24 19:28:49.796229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.656 [2024-07-24 19:28:49.796248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.656 qpair failed and we were unable to recover it. 
00:28:03.656 [2024-07-24 19:28:49.796508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.656 [2024-07-24 19:28:49.796558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.656 qpair failed and we were unable to recover it. 00:28:03.656 [2024-07-24 19:28:49.796948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.656 [2024-07-24 19:28:49.796989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.656 qpair failed and we were unable to recover it. 00:28:03.656 [2024-07-24 19:28:49.797294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.656 [2024-07-24 19:28:49.797336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.656 qpair failed and we were unable to recover it. 00:28:03.656 [2024-07-24 19:28:49.797680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.656 [2024-07-24 19:28:49.797726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.656 qpair failed and we were unable to recover it. 00:28:03.656 [2024-07-24 19:28:49.798074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.656 [2024-07-24 19:28:49.798115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.656 qpair failed and we were unable to recover it. 00:28:03.656 [2024-07-24 19:28:49.798488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.656 [2024-07-24 19:28:49.798529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.656 qpair failed and we were unable to recover it. 00:28:03.656 [2024-07-24 19:28:49.798832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.656 [2024-07-24 19:28:49.798875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.656 qpair failed and we were unable to recover it. 00:28:03.656 [2024-07-24 19:28:49.799113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.656 [2024-07-24 19:28:49.799132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.656 qpair failed and we were unable to recover it. 00:28:03.656 [2024-07-24 19:28:49.799375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.656 [2024-07-24 19:28:49.799393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.656 qpair failed and we were unable to recover it. 00:28:03.656 [2024-07-24 19:28:49.799705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.656 [2024-07-24 19:28:49.799758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.656 qpair failed and we were unable to recover it. 
00:28:03.656 [2024-07-24 19:28:49.800058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.656 [2024-07-24 19:28:49.800100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.656 qpair failed and we were unable to recover it. 00:28:03.656 [2024-07-24 19:28:49.800439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.656 [2024-07-24 19:28:49.800496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.656 qpair failed and we were unable to recover it. 00:28:03.656 [2024-07-24 19:28:49.800829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.656 [2024-07-24 19:28:49.800871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.656 qpair failed and we were unable to recover it. 00:28:03.656 [2024-07-24 19:28:49.801122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.656 [2024-07-24 19:28:49.801163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.656 qpair failed and we were unable to recover it. 00:28:03.656 [2024-07-24 19:28:49.801492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.656 [2024-07-24 19:28:49.801532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.656 qpair failed and we were unable to recover it. 00:28:03.656 [2024-07-24 19:28:49.801887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.656 [2024-07-24 19:28:49.801929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.656 qpair failed and we were unable to recover it. 00:28:03.656 [2024-07-24 19:28:49.802253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.656 [2024-07-24 19:28:49.802294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.656 qpair failed and we were unable to recover it. 00:28:03.656 [2024-07-24 19:28:49.802618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.656 [2024-07-24 19:28:49.802660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.656 qpair failed and we were unable to recover it. 00:28:03.656 [2024-07-24 19:28:49.803077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.656 [2024-07-24 19:28:49.803121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.656 qpair failed and we were unable to recover it. 00:28:03.656 [2024-07-24 19:28:49.803431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.656 [2024-07-24 19:28:49.803472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.656 qpair failed and we were unable to recover it. 
00:28:03.656 [2024-07-24 19:28:49.803853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.656 [2024-07-24 19:28:49.803895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.656 qpair failed and we were unable to recover it. 00:28:03.656 [2024-07-24 19:28:49.804250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.656 [2024-07-24 19:28:49.804292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.656 qpair failed and we were unable to recover it. 00:28:03.656 [2024-07-24 19:28:49.804671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.656 [2024-07-24 19:28:49.804712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.656 qpair failed and we were unable to recover it. 00:28:03.656 [2024-07-24 19:28:49.805018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.656 [2024-07-24 19:28:49.805061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.656 qpair failed and we were unable to recover it. 00:28:03.656 [2024-07-24 19:28:49.805440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.657 [2024-07-24 19:28:49.805482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.657 qpair failed and we were unable to recover it. 00:28:03.657 [2024-07-24 19:28:49.805772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.657 [2024-07-24 19:28:49.805791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.657 qpair failed and we were unable to recover it. 00:28:03.657 [2024-07-24 19:28:49.806047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.657 [2024-07-24 19:28:49.806093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.657 qpair failed and we were unable to recover it. 00:28:03.657 [2024-07-24 19:28:49.806408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.657 [2024-07-24 19:28:49.806450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.657 qpair failed and we were unable to recover it. 00:28:03.657 [2024-07-24 19:28:49.806827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.657 [2024-07-24 19:28:49.806868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.657 qpair failed and we were unable to recover it. 00:28:03.657 [2024-07-24 19:28:49.807187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.657 [2024-07-24 19:28:49.807230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.657 qpair failed and we were unable to recover it. 
00:28:03.657 [2024-07-24 19:28:49.807550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.657 [2024-07-24 19:28:49.807593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.657 qpair failed and we were unable to recover it. 00:28:03.657 [2024-07-24 19:28:49.807890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.657 [2024-07-24 19:28:49.807909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.657 qpair failed and we were unable to recover it. 00:28:03.657 [2024-07-24 19:28:49.808171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.657 [2024-07-24 19:28:49.808218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.657 qpair failed and we were unable to recover it. 00:28:03.657 [2024-07-24 19:28:49.808525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.657 [2024-07-24 19:28:49.808567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.657 qpair failed and we were unable to recover it. 00:28:03.657 [2024-07-24 19:28:49.808958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.657 [2024-07-24 19:28:49.809000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.657 qpair failed and we were unable to recover it. 00:28:03.657 [2024-07-24 19:28:49.809351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.657 [2024-07-24 19:28:49.809392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.657 qpair failed and we were unable to recover it. 00:28:03.657 [2024-07-24 19:28:49.809747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.657 [2024-07-24 19:28:49.809787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.657 qpair failed and we were unable to recover it. 00:28:03.657 [2024-07-24 19:28:49.810094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.657 [2024-07-24 19:28:49.810136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.657 qpair failed and we were unable to recover it. 00:28:03.657 [2024-07-24 19:28:49.810519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.657 [2024-07-24 19:28:49.810561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.657 qpair failed and we were unable to recover it. 00:28:03.657 [2024-07-24 19:28:49.810960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.657 [2024-07-24 19:28:49.811002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.657 qpair failed and we were unable to recover it. 
00:28:03.657 [2024-07-24 19:28:49.811354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.657 [2024-07-24 19:28:49.811395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.657 qpair failed and we were unable to recover it. 00:28:03.657 [2024-07-24 19:28:49.811783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.657 [2024-07-24 19:28:49.811825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.657 qpair failed and we were unable to recover it. 00:28:03.657 [2024-07-24 19:28:49.812134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.657 [2024-07-24 19:28:49.812176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.657 qpair failed and we were unable to recover it. 00:28:03.657 [2024-07-24 19:28:49.812499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.657 [2024-07-24 19:28:49.812542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.657 qpair failed and we were unable to recover it. 00:28:03.657 [2024-07-24 19:28:49.812932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.657 [2024-07-24 19:28:49.812974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.657 qpair failed and we were unable to recover it. 00:28:03.657 [2024-07-24 19:28:49.813301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.657 [2024-07-24 19:28:49.813342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.657 qpair failed and we were unable to recover it. 00:28:03.657 [2024-07-24 19:28:49.813580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.657 [2024-07-24 19:28:49.813599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.657 qpair failed and we were unable to recover it. 00:28:03.657 [2024-07-24 19:28:49.813960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.657 [2024-07-24 19:28:49.814002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.657 qpair failed and we were unable to recover it. 00:28:03.657 [2024-07-24 19:28:49.814432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.657 [2024-07-24 19:28:49.814474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.657 qpair failed and we were unable to recover it. 00:28:03.657 [2024-07-24 19:28:49.814866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.657 [2024-07-24 19:28:49.814908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.657 qpair failed and we were unable to recover it. 
00:28:03.657 [2024-07-24 19:28:49.815209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.657 [2024-07-24 19:28:49.815250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.657 qpair failed and we were unable to recover it. 00:28:03.657 [2024-07-24 19:28:49.815623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.657 [2024-07-24 19:28:49.815665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.657 qpair failed and we were unable to recover it. 00:28:03.657 [2024-07-24 19:28:49.815956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.657 [2024-07-24 19:28:49.815976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.657 qpair failed and we were unable to recover it. 00:28:03.657 [2024-07-24 19:28:49.816224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.657 [2024-07-24 19:28:49.816243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.657 qpair failed and we were unable to recover it. 00:28:03.657 [2024-07-24 19:28:49.816495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.657 [2024-07-24 19:28:49.816514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.657 qpair failed and we were unable to recover it. 00:28:03.657 [2024-07-24 19:28:49.816763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.657 [2024-07-24 19:28:49.816806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.657 qpair failed and we were unable to recover it. 00:28:03.657 [2024-07-24 19:28:49.817130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.657 [2024-07-24 19:28:49.817173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.657 qpair failed and we were unable to recover it. 00:28:03.657 [2024-07-24 19:28:49.817489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.657 [2024-07-24 19:28:49.817531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.657 qpair failed and we were unable to recover it. 00:28:03.657 [2024-07-24 19:28:49.817908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.657 [2024-07-24 19:28:49.817950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.657 qpair failed and we were unable to recover it. 00:28:03.657 [2024-07-24 19:28:49.818264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.658 [2024-07-24 19:28:49.818305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.658 qpair failed and we were unable to recover it. 
00:28:03.658 [2024-07-24 19:28:49.818698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.658 [2024-07-24 19:28:49.818749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.658 qpair failed and we were unable to recover it. 00:28:03.658 [2024-07-24 19:28:49.819055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.658 [2024-07-24 19:28:49.819096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.658 qpair failed and we were unable to recover it. 00:28:03.658 [2024-07-24 19:28:49.819492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.658 [2024-07-24 19:28:49.819533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.658 qpair failed and we were unable to recover it. 00:28:03.658 [2024-07-24 19:28:49.819869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.658 [2024-07-24 19:28:49.819911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.658 qpair failed and we were unable to recover it. 00:28:03.658 [2024-07-24 19:28:49.820328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.658 [2024-07-24 19:28:49.820370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.658 qpair failed and we were unable to recover it. 00:28:03.658 [2024-07-24 19:28:49.820747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.658 [2024-07-24 19:28:49.820789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.658 qpair failed and we were unable to recover it. 00:28:03.658 [2024-07-24 19:28:49.821126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.658 [2024-07-24 19:28:49.821144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.658 qpair failed and we were unable to recover it. 00:28:03.658 [2024-07-24 19:28:49.821480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.658 [2024-07-24 19:28:49.821521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.658 qpair failed and we were unable to recover it. 00:28:03.658 [2024-07-24 19:28:49.821844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.658 [2024-07-24 19:28:49.821886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.658 qpair failed and we were unable to recover it. 00:28:03.658 [2024-07-24 19:28:49.822266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.658 [2024-07-24 19:28:49.822307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.658 qpair failed and we were unable to recover it. 
00:28:03.658 [2024-07-24 19:28:49.822605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.658 [2024-07-24 19:28:49.822648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.658 qpair failed and we were unable to recover it. 00:28:03.658 [2024-07-24 19:28:49.823015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.658 [2024-07-24 19:28:49.823060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.658 qpair failed and we were unable to recover it. 00:28:03.658 [2024-07-24 19:28:49.823399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.658 [2024-07-24 19:28:49.823446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.658 qpair failed and we were unable to recover it. 00:28:03.658 [2024-07-24 19:28:49.823817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.658 [2024-07-24 19:28:49.823836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.658 qpair failed and we were unable to recover it. 00:28:03.658 [2024-07-24 19:28:49.824165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.658 [2024-07-24 19:28:49.824184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.658 qpair failed and we were unable to recover it. 00:28:03.658 [2024-07-24 19:28:49.824444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.658 [2024-07-24 19:28:49.824484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.658 qpair failed and we were unable to recover it. 00:28:03.658 [2024-07-24 19:28:49.824917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.658 [2024-07-24 19:28:49.824963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.658 qpair failed and we were unable to recover it. 00:28:03.658 [2024-07-24 19:28:49.825271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.658 [2024-07-24 19:28:49.825313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.658 qpair failed and we were unable to recover it. 00:28:03.658 [2024-07-24 19:28:49.825674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.658 [2024-07-24 19:28:49.825725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.658 qpair failed and we were unable to recover it. 00:28:03.658 [2024-07-24 19:28:49.826029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.658 [2024-07-24 19:28:49.826048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.658 qpair failed and we were unable to recover it. 
00:28:03.658 [2024-07-24 19:28:49.826298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.658 [2024-07-24 19:28:49.826317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.658 qpair failed and we were unable to recover it. 00:28:03.658 [2024-07-24 19:28:49.826630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.658 [2024-07-24 19:28:49.826671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.658 qpair failed and we were unable to recover it. 00:28:03.658 [2024-07-24 19:28:49.827003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.658 [2024-07-24 19:28:49.827045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.658 qpair failed and we were unable to recover it. 00:28:03.658 [2024-07-24 19:28:49.827452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.658 [2024-07-24 19:28:49.827494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.658 qpair failed and we were unable to recover it. 00:28:03.658 [2024-07-24 19:28:49.827743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.658 [2024-07-24 19:28:49.827785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.658 qpair failed and we were unable to recover it. 00:28:03.658 [2024-07-24 19:28:49.828156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.658 [2024-07-24 19:28:49.828175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.658 qpair failed and we were unable to recover it. 00:28:03.658 [2024-07-24 19:28:49.828516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.658 [2024-07-24 19:28:49.828535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.658 qpair failed and we were unable to recover it. 00:28:03.658 [2024-07-24 19:28:49.828802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.658 [2024-07-24 19:28:49.828844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.658 qpair failed and we were unable to recover it. 00:28:03.658 [2024-07-24 19:28:49.829153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.658 [2024-07-24 19:28:49.829195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.658 qpair failed and we were unable to recover it. 00:28:03.658 [2024-07-24 19:28:49.829571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.658 [2024-07-24 19:28:49.829611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.658 qpair failed and we were unable to recover it. 
00:28:03.658 [2024-07-24 19:28:49.829961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.658 [2024-07-24 19:28:49.830002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.658 qpair failed and we were unable to recover it. 00:28:03.658 [2024-07-24 19:28:49.830357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.658 [2024-07-24 19:28:49.830399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.658 qpair failed and we were unable to recover it. 00:28:03.658 [2024-07-24 19:28:49.830789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.658 [2024-07-24 19:28:49.830831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.658 qpair failed and we were unable to recover it. 00:28:03.658 [2024-07-24 19:28:49.831139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.658 [2024-07-24 19:28:49.831181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.658 qpair failed and we were unable to recover it. 00:28:03.658 [2024-07-24 19:28:49.831530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.658 [2024-07-24 19:28:49.831573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.659 qpair failed and we were unable to recover it. 00:28:03.659 [2024-07-24 19:28:49.831992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.659 [2024-07-24 19:28:49.832033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.659 qpair failed and we were unable to recover it. 00:28:03.659 [2024-07-24 19:28:49.832387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.659 [2024-07-24 19:28:49.832429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.659 qpair failed and we were unable to recover it. 00:28:03.659 [2024-07-24 19:28:49.832654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.659 [2024-07-24 19:28:49.832696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.659 qpair failed and we were unable to recover it. 00:28:03.659 [2024-07-24 19:28:49.833077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.659 [2024-07-24 19:28:49.833097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.659 qpair failed and we were unable to recover it. 00:28:03.659 [2024-07-24 19:28:49.833413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.659 [2024-07-24 19:28:49.833455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.659 qpair failed and we were unable to recover it. 
00:28:03.659 [2024-07-24 19:28:49.833741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.659 [2024-07-24 19:28:49.833783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.659 qpair failed and we were unable to recover it. 00:28:03.659 [2024-07-24 19:28:49.834078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.659 [2024-07-24 19:28:49.834119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.659 qpair failed and we were unable to recover it. 00:28:03.659 [2024-07-24 19:28:49.834379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.659 [2024-07-24 19:28:49.834420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.659 qpair failed and we were unable to recover it. 00:28:03.659 [2024-07-24 19:28:49.834796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.659 [2024-07-24 19:28:49.834838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.659 qpair failed and we were unable to recover it. 00:28:03.659 [2024-07-24 19:28:49.835090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.659 [2024-07-24 19:28:49.835132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.659 qpair failed and we were unable to recover it. 00:28:03.659 [2024-07-24 19:28:49.835500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.659 [2024-07-24 19:28:49.835542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.659 qpair failed and we were unable to recover it. 00:28:03.659 [2024-07-24 19:28:49.835899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.659 [2024-07-24 19:28:49.835919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.659 qpair failed and we were unable to recover it. 00:28:03.659 [2024-07-24 19:28:49.836108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.659 [2024-07-24 19:28:49.836149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.659 qpair failed and we were unable to recover it. 00:28:03.659 [2024-07-24 19:28:49.836512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.659 [2024-07-24 19:28:49.836554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.659 qpair failed and we were unable to recover it. 00:28:03.659 [2024-07-24 19:28:49.836910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.659 [2024-07-24 19:28:49.836929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.659 qpair failed and we were unable to recover it. 
00:28:03.659 [2024-07-24 19:28:49.837133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.659 [2024-07-24 19:28:49.837152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.659 qpair failed and we were unable to recover it. 00:28:03.659 [2024-07-24 19:28:49.837457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.659 [2024-07-24 19:28:49.837477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.659 qpair failed and we were unable to recover it. 00:28:03.659 [2024-07-24 19:28:49.837720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.659 [2024-07-24 19:28:49.837742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.659 qpair failed and we were unable to recover it. 00:28:03.659 [2024-07-24 19:28:49.838071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.659 [2024-07-24 19:28:49.838114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.659 qpair failed and we were unable to recover it. 00:28:03.659 [2024-07-24 19:28:49.838520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.659 [2024-07-24 19:28:49.838562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.659 qpair failed and we were unable to recover it. 00:28:03.659 [2024-07-24 19:28:49.838850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.659 [2024-07-24 19:28:49.838893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.659 qpair failed and we were unable to recover it. 00:28:03.659 [2024-07-24 19:28:49.839191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.659 [2024-07-24 19:28:49.839232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.659 qpair failed and we were unable to recover it. 00:28:03.659 [2024-07-24 19:28:49.839475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.659 [2024-07-24 19:28:49.839517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.659 qpair failed and we were unable to recover it. 00:28:03.659 [2024-07-24 19:28:49.839872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.659 [2024-07-24 19:28:49.839913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.659 qpair failed and we were unable to recover it. 00:28:03.659 [2024-07-24 19:28:49.840146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.659 [2024-07-24 19:28:49.840188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.659 qpair failed and we were unable to recover it. 
00:28:03.659 [2024-07-24 19:28:49.840489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.659 [2024-07-24 19:28:49.840531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.659 qpair failed and we were unable to recover it. 00:28:03.659 [2024-07-24 19:28:49.840956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.659 [2024-07-24 19:28:49.840999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.659 qpair failed and we were unable to recover it. 00:28:03.659 [2024-07-24 19:28:49.841269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.659 [2024-07-24 19:28:49.841288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.659 qpair failed and we were unable to recover it. 00:28:03.659 [2024-07-24 19:28:49.841725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.659 [2024-07-24 19:28:49.841767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.659 qpair failed and we were unable to recover it. 00:28:03.659 [2024-07-24 19:28:49.842152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.659 [2024-07-24 19:28:49.842194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.659 qpair failed and we were unable to recover it. 00:28:03.659 [2024-07-24 19:28:49.842611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.659 [2024-07-24 19:28:49.842653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.659 qpair failed and we were unable to recover it. 00:28:03.659 [2024-07-24 19:28:49.843015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.659 [2024-07-24 19:28:49.843058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.659 qpair failed and we were unable to recover it. 00:28:03.659 [2024-07-24 19:28:49.843444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.659 [2024-07-24 19:28:49.843487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.659 qpair failed and we were unable to recover it. 00:28:03.659 [2024-07-24 19:28:49.843857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.659 [2024-07-24 19:28:49.843877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.659 qpair failed and we were unable to recover it. 00:28:03.659 [2024-07-24 19:28:49.844139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.659 [2024-07-24 19:28:49.844181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.659 qpair failed and we were unable to recover it. 
00:28:03.660 [2024-07-24 19:28:49.844416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.660 [2024-07-24 19:28:49.844457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.660 qpair failed and we were unable to recover it. 00:28:03.660 [2024-07-24 19:28:49.844785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.660 [2024-07-24 19:28:49.844827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.660 qpair failed and we were unable to recover it. 00:28:03.660 [2024-07-24 19:28:49.845134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.660 [2024-07-24 19:28:49.845176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.660 qpair failed and we were unable to recover it. 00:28:03.660 [2024-07-24 19:28:49.845552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.660 [2024-07-24 19:28:49.845593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.660 qpair failed and we were unable to recover it. 00:28:03.660 [2024-07-24 19:28:49.845907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.660 [2024-07-24 19:28:49.845967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.660 qpair failed and we were unable to recover it. 00:28:03.660 [2024-07-24 19:28:49.846331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.660 [2024-07-24 19:28:49.846373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.660 qpair failed and we were unable to recover it. 00:28:03.660 [2024-07-24 19:28:49.846758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.660 [2024-07-24 19:28:49.846800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.660 qpair failed and we were unable to recover it. 00:28:03.660 [2024-07-24 19:28:49.847122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.660 [2024-07-24 19:28:49.847162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.660 qpair failed and we were unable to recover it. 00:28:03.660 [2024-07-24 19:28:49.847497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.660 [2024-07-24 19:28:49.847539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.660 qpair failed and we were unable to recover it. 00:28:03.660 [2024-07-24 19:28:49.847820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.660 [2024-07-24 19:28:49.847840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.660 qpair failed and we were unable to recover it. 
00:28:03.660 [2024-07-24 19:28:49.848134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.660 [2024-07-24 19:28:49.848175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.660 qpair failed and we were unable to recover it. 00:28:03.660 [2024-07-24 19:28:49.848566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.660 [2024-07-24 19:28:49.848608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.660 qpair failed and we were unable to recover it. 00:28:03.660 [2024-07-24 19:28:49.848921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.660 [2024-07-24 19:28:49.848941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.660 qpair failed and we were unable to recover it. 00:28:03.660 [2024-07-24 19:28:49.849183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.660 [2024-07-24 19:28:49.849202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.660 qpair failed and we were unable to recover it. 00:28:03.660 [2024-07-24 19:28:49.849456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.660 [2024-07-24 19:28:49.849500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.660 qpair failed and we were unable to recover it. 00:28:03.660 [2024-07-24 19:28:49.849863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.660 [2024-07-24 19:28:49.849906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.660 qpair failed and we were unable to recover it. 00:28:03.660 [2024-07-24 19:28:49.850286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.660 [2024-07-24 19:28:49.850335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.660 qpair failed and we were unable to recover it. 00:28:03.660 [2024-07-24 19:28:49.850712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.660 [2024-07-24 19:28:49.850764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.660 qpair failed and we were unable to recover it. 00:28:03.660 [2024-07-24 19:28:49.851148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.660 [2024-07-24 19:28:49.851190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.660 qpair failed and we were unable to recover it. 00:28:03.660 [2024-07-24 19:28:49.851523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.660 [2024-07-24 19:28:49.851564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.660 qpair failed and we were unable to recover it. 
00:28:03.660 [2024-07-24 19:28:49.851804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.660 [2024-07-24 19:28:49.851824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.660 qpair failed and we were unable to recover it. 00:28:03.660 [2024-07-24 19:28:49.852061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.660 [2024-07-24 19:28:49.852104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.660 qpair failed and we were unable to recover it. 00:28:03.660 [2024-07-24 19:28:49.852408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.660 [2024-07-24 19:28:49.852455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.660 qpair failed and we were unable to recover it. 00:28:03.660 [2024-07-24 19:28:49.852771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.660 [2024-07-24 19:28:49.852812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.660 qpair failed and we were unable to recover it. 00:28:03.660 [2024-07-24 19:28:49.853120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.660 [2024-07-24 19:28:49.853162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.660 qpair failed and we were unable to recover it. 00:28:03.660 [2024-07-24 19:28:49.853489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.660 [2024-07-24 19:28:49.853531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.660 qpair failed and we were unable to recover it. 00:28:03.660 [2024-07-24 19:28:49.853889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.660 [2024-07-24 19:28:49.853932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.660 qpair failed and we were unable to recover it. 00:28:03.660 [2024-07-24 19:28:49.854268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.660 [2024-07-24 19:28:49.854310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.660 qpair failed and we were unable to recover it. 00:28:03.660 [2024-07-24 19:28:49.854690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.660 [2024-07-24 19:28:49.854755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.660 qpair failed and we were unable to recover it. 00:28:03.660 [2024-07-24 19:28:49.854965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.660 [2024-07-24 19:28:49.854985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.660 qpair failed and we were unable to recover it. 
00:28:03.660 [2024-07-24 19:28:49.855212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.661 [2024-07-24 19:28:49.855231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.661 qpair failed and we were unable to recover it. 00:28:03.661 [2024-07-24 19:28:49.855468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.661 [2024-07-24 19:28:49.855488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.661 qpair failed and we were unable to recover it. 00:28:03.661 [2024-07-24 19:28:49.855723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.661 [2024-07-24 19:28:49.855743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.661 qpair failed and we were unable to recover it. 00:28:03.661 [2024-07-24 19:28:49.855992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.661 [2024-07-24 19:28:49.856011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.661 qpair failed and we were unable to recover it. 00:28:03.661 [2024-07-24 19:28:49.857630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.661 [2024-07-24 19:28:49.857667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.661 qpair failed and we were unable to recover it. 00:28:03.661 [2024-07-24 19:28:49.858039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.661 [2024-07-24 19:28:49.858084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.661 qpair failed and we were unable to recover it. 00:28:03.661 [2024-07-24 19:28:49.858503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.661 [2024-07-24 19:28:49.858546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.661 qpair failed and we were unable to recover it. 00:28:03.661 [2024-07-24 19:28:49.858897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.661 [2024-07-24 19:28:49.858947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.661 qpair failed and we were unable to recover it. 00:28:03.661 [2024-07-24 19:28:49.859255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.661 [2024-07-24 19:28:49.859297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.661 qpair failed and we were unable to recover it. 00:28:03.661 [2024-07-24 19:28:49.859599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.661 [2024-07-24 19:28:49.859640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.661 qpair failed and we were unable to recover it. 
00:28:03.939 [2024-07-24 19:28:49.876991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.939 [2024-07-24 19:28:49.877010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420
00:28:03.939 qpair failed and we were unable to recover it.
00:28:03.939 [2024-07-24 19:28:49.877378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.939 [2024-07-24 19:28:49.877465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420
00:28:03.939 qpair failed and we were unable to recover it.
00:28:03.943 [2024-07-24 19:28:49.918500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.943 [2024-07-24 19:28:49.918543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.943 qpair failed and we were unable to recover it.
00:28:03.943 [2024-07-24 19:28:49.921037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.943 [2024-07-24 19:28:49.921052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.943 qpair failed and we were unable to recover it. 00:28:03.943 [2024-07-24 19:28:49.921228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.943 [2024-07-24 19:28:49.921242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.943 qpair failed and we were unable to recover it. 00:28:03.943 [2024-07-24 19:28:49.921497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.943 [2024-07-24 19:28:49.921512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.943 qpair failed and we were unable to recover it. 00:28:03.943 [2024-07-24 19:28:49.921829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.943 [2024-07-24 19:28:49.921844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.943 qpair failed and we were unable to recover it. 00:28:03.943 [2024-07-24 19:28:49.922035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.943 [2024-07-24 19:28:49.922086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.943 qpair failed and we were unable to recover it. 00:28:03.943 [2024-07-24 19:28:49.922503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.943 [2024-07-24 19:28:49.922544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.943 qpair failed and we were unable to recover it. 00:28:03.943 [2024-07-24 19:28:49.922898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.943 [2024-07-24 19:28:49.922941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.943 qpair failed and we were unable to recover it. 00:28:03.943 [2024-07-24 19:28:49.923191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.943 [2024-07-24 19:28:49.923206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.943 qpair failed and we were unable to recover it. 00:28:03.943 [2024-07-24 19:28:49.923508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.943 [2024-07-24 19:28:49.923522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.943 qpair failed and we were unable to recover it. 00:28:03.943 [2024-07-24 19:28:49.923773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.943 [2024-07-24 19:28:49.923788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.943 qpair failed and we were unable to recover it. 
00:28:03.943 [2024-07-24 19:28:49.923955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.943 [2024-07-24 19:28:49.923970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.943 qpair failed and we were unable to recover it. 00:28:03.943 [2024-07-24 19:28:49.924217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.943 [2024-07-24 19:28:49.924231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.943 qpair failed and we were unable to recover it. 00:28:03.943 [2024-07-24 19:28:49.924562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.943 [2024-07-24 19:28:49.924603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.943 qpair failed and we were unable to recover it. 00:28:03.943 [2024-07-24 19:28:49.924962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.943 [2024-07-24 19:28:49.925004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.943 qpair failed and we were unable to recover it. 00:28:03.943 [2024-07-24 19:28:49.925249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.943 [2024-07-24 19:28:49.925264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.943 qpair failed and we were unable to recover it. 00:28:03.943 [2024-07-24 19:28:49.925657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.943 [2024-07-24 19:28:49.925672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.943 qpair failed and we were unable to recover it. 00:28:03.943 [2024-07-24 19:28:49.925916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.943 [2024-07-24 19:28:49.925931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.944 qpair failed and we were unable to recover it. 00:28:03.944 [2024-07-24 19:28:49.926157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.944 [2024-07-24 19:28:49.926172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.944 qpair failed and we were unable to recover it. 00:28:03.944 [2024-07-24 19:28:49.926424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.944 [2024-07-24 19:28:49.926439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.944 qpair failed and we were unable to recover it. 00:28:03.944 [2024-07-24 19:28:49.926610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.944 [2024-07-24 19:28:49.926625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.944 qpair failed and we were unable to recover it. 
00:28:03.944 [2024-07-24 19:28:49.926858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.944 [2024-07-24 19:28:49.926873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.944 qpair failed and we were unable to recover it. 00:28:03.944 [2024-07-24 19:28:49.927120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.944 [2024-07-24 19:28:49.927135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.944 qpair failed and we were unable to recover it. 00:28:03.944 [2024-07-24 19:28:49.927377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.944 [2024-07-24 19:28:49.927391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.944 qpair failed and we were unable to recover it. 00:28:03.944 [2024-07-24 19:28:49.927719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.944 [2024-07-24 19:28:49.927735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.944 qpair failed and we were unable to recover it. 00:28:03.944 [2024-07-24 19:28:49.927931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.944 [2024-07-24 19:28:49.927946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.944 qpair failed and we were unable to recover it. 00:28:03.944 [2024-07-24 19:28:49.928191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.944 [2024-07-24 19:28:49.928232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.944 qpair failed and we were unable to recover it. 00:28:03.944 [2024-07-24 19:28:49.928601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.944 [2024-07-24 19:28:49.928642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.944 qpair failed and we were unable to recover it. 00:28:03.944 [2024-07-24 19:28:49.928995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.944 [2024-07-24 19:28:49.929037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.944 qpair failed and we were unable to recover it. 00:28:03.944 [2024-07-24 19:28:49.929413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.944 [2024-07-24 19:28:49.929427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.944 qpair failed and we were unable to recover it. 00:28:03.944 [2024-07-24 19:28:49.929749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.944 [2024-07-24 19:28:49.929763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.944 qpair failed and we were unable to recover it. 
00:28:03.944 [2024-07-24 19:28:49.930065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.944 [2024-07-24 19:28:49.930106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.944 qpair failed and we were unable to recover it. 00:28:03.944 [2024-07-24 19:28:49.930471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.944 [2024-07-24 19:28:49.930513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.944 qpair failed and we were unable to recover it. 00:28:03.944 [2024-07-24 19:28:49.930870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.944 [2024-07-24 19:28:49.930911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.944 qpair failed and we were unable to recover it. 00:28:03.944 [2024-07-24 19:28:49.931221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.944 [2024-07-24 19:28:49.931236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.944 qpair failed and we were unable to recover it. 00:28:03.944 [2024-07-24 19:28:49.931522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.944 [2024-07-24 19:28:49.931537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.944 qpair failed and we were unable to recover it. 00:28:03.944 [2024-07-24 19:28:49.931776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.944 [2024-07-24 19:28:49.931792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.944 qpair failed and we were unable to recover it. 00:28:03.944 [2024-07-24 19:28:49.932041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.944 [2024-07-24 19:28:49.932081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.944 qpair failed and we were unable to recover it. 00:28:03.944 [2024-07-24 19:28:49.932367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.944 [2024-07-24 19:28:49.932407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.944 qpair failed and we were unable to recover it. 00:28:03.944 [2024-07-24 19:28:49.932765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.944 [2024-07-24 19:28:49.932807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.944 qpair failed and we were unable to recover it. 00:28:03.944 [2024-07-24 19:28:49.933107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.944 [2024-07-24 19:28:49.933123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.944 qpair failed and we were unable to recover it. 
00:28:03.944 [2024-07-24 19:28:49.933313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.944 [2024-07-24 19:28:49.933328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.944 qpair failed and we were unable to recover it. 00:28:03.944 [2024-07-24 19:28:49.933652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.944 [2024-07-24 19:28:49.933667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.944 qpair failed and we were unable to recover it. 00:28:03.944 [2024-07-24 19:28:49.933964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.944 [2024-07-24 19:28:49.933978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.944 qpair failed and we were unable to recover it. 00:28:03.944 [2024-07-24 19:28:49.934183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.944 [2024-07-24 19:28:49.934224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.944 qpair failed and we were unable to recover it. 00:28:03.944 [2024-07-24 19:28:49.934626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.944 [2024-07-24 19:28:49.934673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.944 qpair failed and we were unable to recover it. 00:28:03.944 [2024-07-24 19:28:49.935039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.944 [2024-07-24 19:28:49.935107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.944 qpair failed and we were unable to recover it. 00:28:03.944 [2024-07-24 19:28:49.935382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.944 [2024-07-24 19:28:49.935404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.944 qpair failed and we were unable to recover it. 00:28:03.944 [2024-07-24 19:28:49.935663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.944 [2024-07-24 19:28:49.935683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.944 qpair failed and we were unable to recover it. 00:28:03.944 [2024-07-24 19:28:49.935961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.944 [2024-07-24 19:28:49.935982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.944 qpair failed and we were unable to recover it. 00:28:03.944 [2024-07-24 19:28:49.936166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.944 [2024-07-24 19:28:49.936185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.944 qpair failed and we were unable to recover it. 
00:28:03.944 [2024-07-24 19:28:49.936502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.944 [2024-07-24 19:28:49.936521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.945 qpair failed and we were unable to recover it. 00:28:03.945 [2024-07-24 19:28:49.936897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.945 [2024-07-24 19:28:49.936917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.945 qpair failed and we were unable to recover it. 00:28:03.945 [2024-07-24 19:28:49.937178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.945 [2024-07-24 19:28:49.937220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.945 qpair failed and we were unable to recover it. 00:28:03.945 [2024-07-24 19:28:49.937538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.945 [2024-07-24 19:28:49.937580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.945 qpair failed and we were unable to recover it. 00:28:03.945 [2024-07-24 19:28:49.937887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.945 [2024-07-24 19:28:49.937928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.945 qpair failed and we were unable to recover it. 00:28:03.945 [2024-07-24 19:28:49.938182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.945 [2024-07-24 19:28:49.938223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.945 qpair failed and we were unable to recover it. 00:28:03.945 [2024-07-24 19:28:49.938457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.945 [2024-07-24 19:28:49.938476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.945 qpair failed and we were unable to recover it. 00:28:03.945 [2024-07-24 19:28:49.938748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.945 [2024-07-24 19:28:49.938767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.945 qpair failed and we were unable to recover it. 00:28:03.945 [2024-07-24 19:28:49.939096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.945 [2024-07-24 19:28:49.939115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.945 qpair failed and we were unable to recover it. 00:28:03.945 [2024-07-24 19:28:49.939307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.945 [2024-07-24 19:28:49.939326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.945 qpair failed and we were unable to recover it. 
00:28:03.945 [2024-07-24 19:28:49.939665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.945 [2024-07-24 19:28:49.939707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.945 qpair failed and we were unable to recover it. 00:28:03.945 [2024-07-24 19:28:49.940000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.945 [2024-07-24 19:28:49.940042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.945 qpair failed and we were unable to recover it. 00:28:03.945 [2024-07-24 19:28:49.940350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.945 [2024-07-24 19:28:49.940370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.945 qpair failed and we were unable to recover it. 00:28:03.945 [2024-07-24 19:28:49.940548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.945 [2024-07-24 19:28:49.940568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.945 qpair failed and we were unable to recover it. 00:28:03.945 [2024-07-24 19:28:49.940820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.945 [2024-07-24 19:28:49.940840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.945 qpair failed and we were unable to recover it. 00:28:03.945 [2024-07-24 19:28:49.941120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.945 [2024-07-24 19:28:49.941138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.945 qpair failed and we were unable to recover it. 00:28:03.945 [2024-07-24 19:28:49.941369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.945 [2024-07-24 19:28:49.941389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.945 qpair failed and we were unable to recover it. 00:28:03.945 [2024-07-24 19:28:49.941592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.945 [2024-07-24 19:28:49.941634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.945 qpair failed and we were unable to recover it. 00:28:03.945 [2024-07-24 19:28:49.941991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.945 [2024-07-24 19:28:49.942034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.945 qpair failed and we were unable to recover it. 00:28:03.945 [2024-07-24 19:28:49.942440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.945 [2024-07-24 19:28:49.942458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.945 qpair failed and we were unable to recover it. 
00:28:03.945 [2024-07-24 19:28:49.942725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.945 [2024-07-24 19:28:49.942745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.945 qpair failed and we were unable to recover it. 00:28:03.945 [2024-07-24 19:28:49.942927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.945 [2024-07-24 19:28:49.942947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.945 qpair failed and we were unable to recover it. 00:28:03.945 [2024-07-24 19:28:49.943154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.945 [2024-07-24 19:28:49.943173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.945 qpair failed and we were unable to recover it. 00:28:03.945 [2024-07-24 19:28:49.943444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.945 [2024-07-24 19:28:49.943463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.945 qpair failed and we were unable to recover it. 00:28:03.945 [2024-07-24 19:28:49.943731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.945 [2024-07-24 19:28:49.943752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.945 qpair failed and we were unable to recover it. 00:28:03.945 [2024-07-24 19:28:49.944010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.945 [2024-07-24 19:28:49.944029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.945 qpair failed and we were unable to recover it. 00:28:03.945 [2024-07-24 19:28:49.944230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.945 [2024-07-24 19:28:49.944249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.945 qpair failed and we were unable to recover it. 00:28:03.945 [2024-07-24 19:28:49.944498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.945 [2024-07-24 19:28:49.944516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.945 qpair failed and we were unable to recover it. 00:28:03.945 [2024-07-24 19:28:49.944758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.945 [2024-07-24 19:28:49.944777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.945 qpair failed and we were unable to recover it. 00:28:03.945 [2024-07-24 19:28:49.945031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.945 [2024-07-24 19:28:49.945050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.945 qpair failed and we were unable to recover it. 
00:28:03.945 [2024-07-24 19:28:49.945240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.945 [2024-07-24 19:28:49.945259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.945 qpair failed and we were unable to recover it. 00:28:03.945 [2024-07-24 19:28:49.945637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.945 [2024-07-24 19:28:49.945657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.945 qpair failed and we were unable to recover it. 00:28:03.945 [2024-07-24 19:28:49.945912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.945 [2024-07-24 19:28:49.945931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.945 qpair failed and we were unable to recover it. 00:28:03.945 [2024-07-24 19:28:49.946182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.945 [2024-07-24 19:28:49.946201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.945 qpair failed and we were unable to recover it. 00:28:03.945 [2024-07-24 19:28:49.946506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.945 [2024-07-24 19:28:49.946547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.945 qpair failed and we were unable to recover it. 00:28:03.945 [2024-07-24 19:28:49.946842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.945 [2024-07-24 19:28:49.946928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.945 qpair failed and we were unable to recover it. 00:28:03.945 [2024-07-24 19:28:49.947185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.946 [2024-07-24 19:28:49.947232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.946 qpair failed and we were unable to recover it. 00:28:03.946 [2024-07-24 19:28:49.947533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.946 [2024-07-24 19:28:49.947552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.946 qpair failed and we were unable to recover it. 00:28:03.946 [2024-07-24 19:28:49.947751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.946 [2024-07-24 19:28:49.947771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.946 qpair failed and we were unable to recover it. 00:28:03.946 [2024-07-24 19:28:49.948083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.946 [2024-07-24 19:28:49.948126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.946 qpair failed and we were unable to recover it. 
00:28:03.946 [2024-07-24 19:28:49.948477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.946 [2024-07-24 19:28:49.948519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.946 qpair failed and we were unable to recover it. 00:28:03.946 [2024-07-24 19:28:49.948901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.946 [2024-07-24 19:28:49.948944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.946 qpair failed and we were unable to recover it. 00:28:03.946 [2024-07-24 19:28:49.949274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.946 [2024-07-24 19:28:49.949294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.946 qpair failed and we were unable to recover it. 00:28:03.946 [2024-07-24 19:28:49.949645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.946 [2024-07-24 19:28:49.949664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.946 qpair failed and we were unable to recover it. 00:28:03.946 [2024-07-24 19:28:49.950021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.946 [2024-07-24 19:28:49.950041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.946 qpair failed and we were unable to recover it. 00:28:03.946 [2024-07-24 19:28:49.950250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.946 [2024-07-24 19:28:49.950268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.946 qpair failed and we were unable to recover it. 00:28:03.946 [2024-07-24 19:28:49.950637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.946 [2024-07-24 19:28:49.950678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.946 qpair failed and we were unable to recover it. 00:28:03.946 [2024-07-24 19:28:49.950984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.946 [2024-07-24 19:28:49.951027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.946 qpair failed and we were unable to recover it. 00:28:03.946 [2024-07-24 19:28:49.951338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.946 [2024-07-24 19:28:49.951389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.946 qpair failed and we were unable to recover it. 00:28:03.946 [2024-07-24 19:28:49.951764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.946 [2024-07-24 19:28:49.951807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.946 qpair failed and we were unable to recover it. 
00:28:03.946 [2024-07-24 19:28:49.952110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.946 [2024-07-24 19:28:49.952129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.946 qpair failed and we were unable to recover it. 00:28:03.946 [2024-07-24 19:28:49.952466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.946 [2024-07-24 19:28:49.952485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.946 qpair failed and we were unable to recover it. 00:28:03.946 [2024-07-24 19:28:49.952769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.946 [2024-07-24 19:28:49.952811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.946 qpair failed and we were unable to recover it. 00:28:03.946 [2024-07-24 19:28:49.953113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.946 [2024-07-24 19:28:49.953155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.946 qpair failed and we were unable to recover it. 00:28:03.946 [2024-07-24 19:28:49.953448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.946 [2024-07-24 19:28:49.953490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.946 qpair failed and we were unable to recover it. 00:28:03.946 [2024-07-24 19:28:49.953869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.946 [2024-07-24 19:28:49.953911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.946 qpair failed and we were unable to recover it. 00:28:03.946 [2024-07-24 19:28:49.954208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.946 [2024-07-24 19:28:49.954250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.946 qpair failed and we were unable to recover it. 00:28:03.946 [2024-07-24 19:28:49.954612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.946 [2024-07-24 19:28:49.954654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.946 qpair failed and we were unable to recover it. 00:28:03.946 [2024-07-24 19:28:49.954991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.946 [2024-07-24 19:28:49.955033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.946 qpair failed and we were unable to recover it. 00:28:03.946 [2024-07-24 19:28:49.955352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.946 [2024-07-24 19:28:49.955372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.946 qpair failed and we were unable to recover it. 
00:28:03.946 [2024-07-24 19:28:49.955627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.946 [2024-07-24 19:28:49.955646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.946 qpair failed and we were unable to recover it. 00:28:03.946 [2024-07-24 19:28:49.955877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.946 [2024-07-24 19:28:49.955898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.946 qpair failed and we were unable to recover it. 00:28:03.946 [2024-07-24 19:28:49.956233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.946 [2024-07-24 19:28:49.956275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.946 qpair failed and we were unable to recover it. 00:28:03.946 [2024-07-24 19:28:49.956685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.946 [2024-07-24 19:28:49.956738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.946 qpair failed and we were unable to recover it. 00:28:03.946 [2024-07-24 19:28:49.957091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.946 [2024-07-24 19:28:49.957133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.946 qpair failed and we were unable to recover it. 00:28:03.946 [2024-07-24 19:28:49.957454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.946 [2024-07-24 19:28:49.957496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.946 qpair failed and we were unable to recover it. 00:28:03.946 [2024-07-24 19:28:49.957858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.946 [2024-07-24 19:28:49.957878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.946 qpair failed and we were unable to recover it. 00:28:03.946 [2024-07-24 19:28:49.958162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.946 [2024-07-24 19:28:49.958205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.946 qpair failed and we were unable to recover it. 00:28:03.946 [2024-07-24 19:28:49.958516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.946 [2024-07-24 19:28:49.958557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.946 qpair failed and we were unable to recover it. 00:28:03.946 [2024-07-24 19:28:49.958919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.946 [2024-07-24 19:28:49.958961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.946 qpair failed and we were unable to recover it. 
00:28:03.946 [2024-07-24 19:28:49.959268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.946 [2024-07-24 19:28:49.959310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.946 qpair failed and we were unable to recover it. 00:28:03.946 [2024-07-24 19:28:49.959662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.947 [2024-07-24 19:28:49.959704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.947 qpair failed and we were unable to recover it. 00:28:03.947 [2024-07-24 19:28:49.960035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.947 [2024-07-24 19:28:49.960077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.947 qpair failed and we were unable to recover it. 00:28:03.947 [2024-07-24 19:28:49.960435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.947 [2024-07-24 19:28:49.960476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.947 qpair failed and we were unable to recover it. 00:28:03.947 [2024-07-24 19:28:49.960854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.947 [2024-07-24 19:28:49.960896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.947 qpair failed and we were unable to recover it. 00:28:03.947 [2024-07-24 19:28:49.961250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.947 [2024-07-24 19:28:49.961269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.947 qpair failed and we were unable to recover it. 00:28:03.947 [2024-07-24 19:28:49.961679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.947 [2024-07-24 19:28:49.961731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.947 qpair failed and we were unable to recover it. 00:28:03.947 [2024-07-24 19:28:49.962109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.947 [2024-07-24 19:28:49.962151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.947 qpair failed and we were unable to recover it. 00:28:03.947 [2024-07-24 19:28:49.962504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.947 [2024-07-24 19:28:49.962524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.947 qpair failed and we were unable to recover it. 00:28:03.947 [2024-07-24 19:28:49.962864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.947 [2024-07-24 19:28:49.962884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.947 qpair failed and we were unable to recover it. 
00:28:03.947 [2024-07-24 19:28:49.963152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.947 [2024-07-24 19:28:49.963195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.947 qpair failed and we were unable to recover it. 00:28:03.947 [2024-07-24 19:28:49.963550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.947 [2024-07-24 19:28:49.963592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.947 qpair failed and we were unable to recover it. 00:28:03.947 [2024-07-24 19:28:49.963893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.947 [2024-07-24 19:28:49.963935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.947 qpair failed and we were unable to recover it. 00:28:03.947 [2024-07-24 19:28:49.964290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.947 [2024-07-24 19:28:49.964332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.947 qpair failed and we were unable to recover it. 00:28:03.947 [2024-07-24 19:28:49.964637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.947 [2024-07-24 19:28:49.964679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.947 qpair failed and we were unable to recover it. 00:28:03.947 [2024-07-24 19:28:49.965027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.947 [2024-07-24 19:28:49.965068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.947 qpair failed and we were unable to recover it. 00:28:03.947 [2024-07-24 19:28:49.965488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.947 [2024-07-24 19:28:49.965530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.947 qpair failed and we were unable to recover it. 00:28:03.947 [2024-07-24 19:28:49.965889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.947 [2024-07-24 19:28:49.965932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.947 qpair failed and we were unable to recover it. 00:28:03.947 [2024-07-24 19:28:49.966239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.947 [2024-07-24 19:28:49.966282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.947 qpair failed and we were unable to recover it. 00:28:03.947 [2024-07-24 19:28:49.966586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.947 [2024-07-24 19:28:49.966605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.947 qpair failed and we were unable to recover it. 
00:28:03.947 [2024-07-24 19:28:49.966952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.947 [2024-07-24 19:28:49.966994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.947 qpair failed and we were unable to recover it. 00:28:03.947 [2024-07-24 19:28:49.967278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.947 [2024-07-24 19:28:49.967297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.947 qpair failed and we were unable to recover it. 00:28:03.947 [2024-07-24 19:28:49.967635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.947 [2024-07-24 19:28:49.967677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.947 qpair failed and we were unable to recover it. 00:28:03.947 [2024-07-24 19:28:49.967993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.947 [2024-07-24 19:28:49.968036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.947 qpair failed and we were unable to recover it. 00:28:03.947 [2024-07-24 19:28:49.968253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.947 [2024-07-24 19:28:49.968272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.947 qpair failed and we were unable to recover it. 00:28:03.947 [2024-07-24 19:28:49.968627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.947 [2024-07-24 19:28:49.968668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.947 qpair failed and we were unable to recover it. 00:28:03.947 [2024-07-24 19:28:49.968988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.947 [2024-07-24 19:28:49.969030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.947 qpair failed and we were unable to recover it. 00:28:03.947 [2024-07-24 19:28:49.969417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.947 [2024-07-24 19:28:49.969459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.947 qpair failed and we were unable to recover it. 00:28:03.947 [2024-07-24 19:28:49.969787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.947 [2024-07-24 19:28:49.969828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.947 qpair failed and we were unable to recover it. 00:28:03.947 [2024-07-24 19:28:49.970215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.947 [2024-07-24 19:28:49.970256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.947 qpair failed and we were unable to recover it. 
00:28:03.947 [2024-07-24 19:28:49.970550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.947 [2024-07-24 19:28:49.970591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.947 qpair failed and we were unable to recover it.
[... the same three-message failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error -> "qpair failed and we were unable to recover it.") repeats for roughly 200 further connection attempts between 19:28:49.970903 and 19:28:50.036250, all against tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 ...]
00:28:03.954 [2024-07-24 19:28:50.036556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.954 [2024-07-24 19:28:50.036596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.954 qpair failed and we were unable to recover it.
[... the same failure repeats for the remaining five attempts against tqpair=0x7fd54c000b90, the last at 19:28:50.037944 ...]
00:28:03.954 [2024-07-24 19:28:50.038146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.954 [2024-07-24 19:28:50.038160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.954 qpair failed and we were unable to recover it. 00:28:03.954 [2024-07-24 19:28:50.038352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.954 [2024-07-24 19:28:50.038367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.954 qpair failed and we were unable to recover it. 00:28:03.954 [2024-07-24 19:28:50.038582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.954 [2024-07-24 19:28:50.038596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.954 qpair failed and we were unable to recover it. 00:28:03.954 [2024-07-24 19:28:50.038774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.954 [2024-07-24 19:28:50.038788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.954 qpair failed and we were unable to recover it. 00:28:03.954 [2024-07-24 19:28:50.039031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.954 [2024-07-24 19:28:50.039047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.954 qpair failed and we were unable to recover it. 00:28:03.954 [2024-07-24 19:28:50.039281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.954 [2024-07-24 19:28:50.039300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.954 qpair failed and we were unable to recover it. 00:28:03.954 [2024-07-24 19:28:50.039624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.954 [2024-07-24 19:28:50.039639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.954 qpair failed and we were unable to recover it. 00:28:03.954 [2024-07-24 19:28:50.039804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.954 [2024-07-24 19:28:50.039819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.954 qpair failed and we were unable to recover it. 00:28:03.954 [2024-07-24 19:28:50.040063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.954 [2024-07-24 19:28:50.040077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.954 qpair failed and we were unable to recover it. 00:28:03.954 [2024-07-24 19:28:50.040241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.954 [2024-07-24 19:28:50.040255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.954 qpair failed and we were unable to recover it. 
00:28:03.954 [2024-07-24 19:28:50.040609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.954 [2024-07-24 19:28:50.040623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.954 qpair failed and we were unable to recover it. 00:28:03.954 [2024-07-24 19:28:50.040905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.954 [2024-07-24 19:28:50.040920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.954 qpair failed and we were unable to recover it. 00:28:03.954 [2024-07-24 19:28:50.041221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.954 [2024-07-24 19:28:50.041235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.954 qpair failed and we were unable to recover it. 00:28:03.954 [2024-07-24 19:28:50.041522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.954 [2024-07-24 19:28:50.041537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.954 qpair failed and we were unable to recover it. 00:28:03.954 [2024-07-24 19:28:50.041796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.954 [2024-07-24 19:28:50.041812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.954 qpair failed and we were unable to recover it. 00:28:03.954 [2024-07-24 19:28:50.042000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.954 [2024-07-24 19:28:50.042015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.954 qpair failed and we were unable to recover it. 00:28:03.954 [2024-07-24 19:28:50.042267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.954 [2024-07-24 19:28:50.042282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.954 qpair failed and we were unable to recover it. 00:28:03.954 [2024-07-24 19:28:50.042653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.954 [2024-07-24 19:28:50.042669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.954 qpair failed and we were unable to recover it. 00:28:03.954 [2024-07-24 19:28:50.042978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.954 [2024-07-24 19:28:50.042993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.954 qpair failed and we were unable to recover it. 00:28:03.954 [2024-07-24 19:28:50.043209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.954 [2024-07-24 19:28:50.043224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.954 qpair failed and we were unable to recover it. 
00:28:03.954 [2024-07-24 19:28:50.043578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.954 [2024-07-24 19:28:50.043593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.954 qpair failed and we were unable to recover it. 00:28:03.954 [2024-07-24 19:28:50.043886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.954 [2024-07-24 19:28:50.043901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.954 qpair failed and we were unable to recover it. 00:28:03.954 [2024-07-24 19:28:50.044066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.954 [2024-07-24 19:28:50.044081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.954 qpair failed and we were unable to recover it. 00:28:03.954 [2024-07-24 19:28:50.044403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.954 [2024-07-24 19:28:50.044417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.954 qpair failed and we were unable to recover it. 00:28:03.954 [2024-07-24 19:28:50.044680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.954 [2024-07-24 19:28:50.044695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.954 qpair failed and we were unable to recover it. 00:28:03.954 [2024-07-24 19:28:50.045003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.954 [2024-07-24 19:28:50.045018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.954 qpair failed and we were unable to recover it. 00:28:03.954 [2024-07-24 19:28:50.045190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.954 [2024-07-24 19:28:50.045204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.954 qpair failed and we were unable to recover it. 00:28:03.954 [2024-07-24 19:28:50.045410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.954 [2024-07-24 19:28:50.045425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.954 qpair failed and we were unable to recover it. 00:28:03.954 [2024-07-24 19:28:50.045648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.954 [2024-07-24 19:28:50.045663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.955 qpair failed and we were unable to recover it. 00:28:03.955 [2024-07-24 19:28:50.045987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.955 [2024-07-24 19:28:50.046002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.955 qpair failed and we were unable to recover it. 
00:28:03.955 [2024-07-24 19:28:50.046201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.955 [2024-07-24 19:28:50.046215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.955 qpair failed and we were unable to recover it. 00:28:03.955 [2024-07-24 19:28:50.046548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.955 [2024-07-24 19:28:50.046563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.955 qpair failed and we were unable to recover it. 00:28:03.955 [2024-07-24 19:28:50.046801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.955 [2024-07-24 19:28:50.046817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.955 qpair failed and we were unable to recover it. 00:28:03.955 [2024-07-24 19:28:50.047018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.955 [2024-07-24 19:28:50.047033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.955 qpair failed and we were unable to recover it. 00:28:03.955 [2024-07-24 19:28:50.047347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.955 [2024-07-24 19:28:50.047362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.955 qpair failed and we were unable to recover it. 00:28:03.955 [2024-07-24 19:28:50.047655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.955 [2024-07-24 19:28:50.047670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.955 qpair failed and we were unable to recover it. 00:28:03.955 [2024-07-24 19:28:50.047917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.955 [2024-07-24 19:28:50.047932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.955 qpair failed and we were unable to recover it. 00:28:03.955 [2024-07-24 19:28:50.048196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.955 [2024-07-24 19:28:50.048211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.955 qpair failed and we were unable to recover it. 00:28:03.955 [2024-07-24 19:28:50.048514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.955 [2024-07-24 19:28:50.048528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.955 qpair failed and we were unable to recover it. 00:28:03.955 [2024-07-24 19:28:50.048774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.955 [2024-07-24 19:28:50.048790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.955 qpair failed and we were unable to recover it. 
00:28:03.955 [2024-07-24 19:28:50.048974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.955 [2024-07-24 19:28:50.048990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.955 qpair failed and we were unable to recover it. 00:28:03.955 [2024-07-24 19:28:50.049224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.955 [2024-07-24 19:28:50.049239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.955 qpair failed and we were unable to recover it. 00:28:03.955 [2024-07-24 19:28:50.049473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.955 [2024-07-24 19:28:50.049488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.955 qpair failed and we were unable to recover it. 00:28:03.955 [2024-07-24 19:28:50.049813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.955 [2024-07-24 19:28:50.049828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.955 qpair failed and we were unable to recover it. 00:28:03.955 [2024-07-24 19:28:50.050137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.955 [2024-07-24 19:28:50.050151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.955 qpair failed and we were unable to recover it. 00:28:03.955 [2024-07-24 19:28:50.050311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.955 [2024-07-24 19:28:50.050327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.955 qpair failed and we were unable to recover it. 00:28:03.955 [2024-07-24 19:28:50.050563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.955 [2024-07-24 19:28:50.050577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.955 qpair failed and we were unable to recover it. 00:28:03.955 [2024-07-24 19:28:50.050867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.955 [2024-07-24 19:28:50.050899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.955 qpair failed and we were unable to recover it. 00:28:03.955 [2024-07-24 19:28:50.051217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.955 [2024-07-24 19:28:50.051232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.955 qpair failed and we were unable to recover it. 00:28:03.955 [2024-07-24 19:28:50.051410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.955 [2024-07-24 19:28:50.051425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.955 qpair failed and we were unable to recover it. 
00:28:03.955 [2024-07-24 19:28:50.051747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.955 [2024-07-24 19:28:50.051763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.955 qpair failed and we were unable to recover it. 00:28:03.955 [2024-07-24 19:28:50.051927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.955 [2024-07-24 19:28:50.051943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.955 qpair failed and we were unable to recover it. 00:28:03.955 [2024-07-24 19:28:50.052192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.955 [2024-07-24 19:28:50.052208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.955 qpair failed and we were unable to recover it. 00:28:03.955 [2024-07-24 19:28:50.052492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.955 [2024-07-24 19:28:50.052508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.955 qpair failed and we were unable to recover it. 00:28:03.955 [2024-07-24 19:28:50.052702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.955 [2024-07-24 19:28:50.052722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.955 qpair failed and we were unable to recover it. 00:28:03.955 [2024-07-24 19:28:50.053001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.955 [2024-07-24 19:28:50.053023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.955 qpair failed and we were unable to recover it. 00:28:03.955 [2024-07-24 19:28:50.053272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.955 [2024-07-24 19:28:50.053295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.955 qpair failed and we were unable to recover it. 00:28:03.955 [2024-07-24 19:28:50.053552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.955 [2024-07-24 19:28:50.053568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.955 qpair failed and we were unable to recover it. 00:28:03.955 [2024-07-24 19:28:50.053896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.955 [2024-07-24 19:28:50.053911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.955 qpair failed and we were unable to recover it. 00:28:03.955 [2024-07-24 19:28:50.054071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.955 [2024-07-24 19:28:50.054089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.955 qpair failed and we were unable to recover it. 
00:28:03.955 [2024-07-24 19:28:50.054342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.955 [2024-07-24 19:28:50.054358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.955 qpair failed and we were unable to recover it. 00:28:03.955 [2024-07-24 19:28:50.054563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.955 [2024-07-24 19:28:50.054585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.955 qpair failed and we were unable to recover it. 00:28:03.955 [2024-07-24 19:28:50.054829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.955 [2024-07-24 19:28:50.054848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.955 qpair failed and we were unable to recover it. 00:28:03.955 [2024-07-24 19:28:50.055026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.955 [2024-07-24 19:28:50.055042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.955 qpair failed and we were unable to recover it. 00:28:03.956 [2024-07-24 19:28:50.055225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.956 [2024-07-24 19:28:50.055243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.956 qpair failed and we were unable to recover it. 00:28:03.956 [2024-07-24 19:28:50.055449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.956 [2024-07-24 19:28:50.055466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.956 qpair failed and we were unable to recover it. 00:28:03.956 [2024-07-24 19:28:50.055640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.956 [2024-07-24 19:28:50.055657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.956 qpair failed and we were unable to recover it. 00:28:03.956 [2024-07-24 19:28:50.055946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.956 [2024-07-24 19:28:50.055970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.956 qpair failed and we were unable to recover it. 00:28:03.956 [2024-07-24 19:28:50.056165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.956 [2024-07-24 19:28:50.056181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.956 qpair failed and we were unable to recover it. 00:28:03.956 [2024-07-24 19:28:50.056447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.956 [2024-07-24 19:28:50.056463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.956 qpair failed and we were unable to recover it. 
00:28:03.956 [2024-07-24 19:28:50.056722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.956 [2024-07-24 19:28:50.056738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.956 qpair failed and we were unable to recover it. 00:28:03.956 [2024-07-24 19:28:50.056979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.956 [2024-07-24 19:28:50.056994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.956 qpair failed and we were unable to recover it. 00:28:03.956 [2024-07-24 19:28:50.057227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.956 [2024-07-24 19:28:50.057261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.956 qpair failed and we were unable to recover it. 00:28:03.956 [2024-07-24 19:28:50.057420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.956 [2024-07-24 19:28:50.057440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.956 qpair failed and we were unable to recover it. 00:28:03.956 [2024-07-24 19:28:50.057740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.956 [2024-07-24 19:28:50.057760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.956 qpair failed and we were unable to recover it. 00:28:03.956 [2024-07-24 19:28:50.057968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.956 [2024-07-24 19:28:50.058008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:03.956 qpair failed and we were unable to recover it. 00:28:03.956 [2024-07-24 19:28:50.058297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.956 [2024-07-24 19:28:50.058343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.956 qpair failed and we were unable to recover it. 00:28:03.956 [2024-07-24 19:28:50.058667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.956 [2024-07-24 19:28:50.058688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.956 qpair failed and we were unable to recover it. 00:28:03.956 [2024-07-24 19:28:50.058955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.956 [2024-07-24 19:28:50.058977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.956 qpair failed and we were unable to recover it. 00:28:03.956 [2024-07-24 19:28:50.059252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.956 [2024-07-24 19:28:50.059271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.956 qpair failed and we were unable to recover it. 
00:28:03.956 [2024-07-24 19:28:50.059590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.956 [2024-07-24 19:28:50.059609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.956 qpair failed and we were unable to recover it. 00:28:03.956 [2024-07-24 19:28:50.059936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.956 [2024-07-24 19:28:50.059955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.956 qpair failed and we were unable to recover it. 00:28:03.956 [2024-07-24 19:28:50.060256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.956 [2024-07-24 19:28:50.060275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.956 qpair failed and we were unable to recover it. 00:28:03.956 [2024-07-24 19:28:50.060568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.956 [2024-07-24 19:28:50.060587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.956 qpair failed and we were unable to recover it. 00:28:03.956 [2024-07-24 19:28:50.060947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.956 [2024-07-24 19:28:50.060966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.956 qpair failed and we were unable to recover it. 00:28:03.956 [2024-07-24 19:28:50.061204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.956 [2024-07-24 19:28:50.061224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.956 qpair failed and we were unable to recover it. 00:28:03.956 [2024-07-24 19:28:50.061429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.956 [2024-07-24 19:28:50.061448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.956 qpair failed and we were unable to recover it. 00:28:03.956 [2024-07-24 19:28:50.061624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.956 [2024-07-24 19:28:50.061643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.956 qpair failed and we were unable to recover it. 00:28:03.956 [2024-07-24 19:28:50.061846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.956 [2024-07-24 19:28:50.061865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.956 qpair failed and we were unable to recover it. 00:28:03.956 [2024-07-24 19:28:50.062153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.956 [2024-07-24 19:28:50.062171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.956 qpair failed and we were unable to recover it. 
00:28:03.956 [2024-07-24 19:28:50.062355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.956 [2024-07-24 19:28:50.062374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.956 qpair failed and we were unable to recover it. 00:28:03.956 [2024-07-24 19:28:50.062563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.956 [2024-07-24 19:28:50.062582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.956 qpair failed and we were unable to recover it. 00:28:03.956 [2024-07-24 19:28:50.062743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.956 [2024-07-24 19:28:50.062762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.956 qpair failed and we were unable to recover it. 00:28:03.956 [2024-07-24 19:28:50.063058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.956 [2024-07-24 19:28:50.063076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.956 qpair failed and we were unable to recover it. 00:28:03.956 [2024-07-24 19:28:50.063235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.956 [2024-07-24 19:28:50.063254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.956 qpair failed and we were unable to recover it. 00:28:03.956 [2024-07-24 19:28:50.063481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.956 [2024-07-24 19:28:50.063500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.956 qpair failed and we were unable to recover it. 00:28:03.956 [2024-07-24 19:28:50.063670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.956 [2024-07-24 19:28:50.063689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.956 qpair failed and we were unable to recover it. 00:28:03.956 [2024-07-24 19:28:50.063884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.956 [2024-07-24 19:28:50.063904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.956 qpair failed and we were unable to recover it. 00:28:03.956 [2024-07-24 19:28:50.064073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.956 [2024-07-24 19:28:50.064092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.956 qpair failed and we were unable to recover it. 00:28:03.957 [2024-07-24 19:28:50.064262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.957 [2024-07-24 19:28:50.064279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.957 qpair failed and we were unable to recover it. 
00:28:03.957 [2024-07-24 19:28:50.064449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.957 [2024-07-24 19:28:50.064464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.957 qpair failed and we were unable to recover it. 00:28:03.957 [2024-07-24 19:28:50.064800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.957 [2024-07-24 19:28:50.064815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.957 qpair failed and we were unable to recover it. 00:28:03.957 [2024-07-24 19:28:50.065031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.957 [2024-07-24 19:28:50.065045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.957 qpair failed and we were unable to recover it. 00:28:03.957 [2024-07-24 19:28:50.065270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.957 [2024-07-24 19:28:50.065284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.957 qpair failed and we were unable to recover it. 00:28:03.957 [2024-07-24 19:28:50.065444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.957 [2024-07-24 19:28:50.065458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.957 qpair failed and we were unable to recover it. 00:28:03.957 [2024-07-24 19:28:50.065793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.957 [2024-07-24 19:28:50.065807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.957 qpair failed and we were unable to recover it. 00:28:03.957 [2024-07-24 19:28:50.065974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.957 [2024-07-24 19:28:50.065988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.957 qpair failed and we were unable to recover it. 00:28:03.957 [2024-07-24 19:28:50.066198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.957 [2024-07-24 19:28:50.066212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.957 qpair failed and we were unable to recover it. 00:28:03.957 [2024-07-24 19:28:50.066375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.957 [2024-07-24 19:28:50.066389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.957 qpair failed and we were unable to recover it. 00:28:03.957 [2024-07-24 19:28:50.066637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.957 [2024-07-24 19:28:50.066651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.957 qpair failed and we were unable to recover it. 
00:28:03.957 [2024-07-24 19:28:50.066799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.957 [2024-07-24 19:28:50.066813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.957 qpair failed and we were unable to recover it. 00:28:03.957 [2024-07-24 19:28:50.067032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.957 [2024-07-24 19:28:50.067046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.957 qpair failed and we were unable to recover it. 00:28:03.957 [2024-07-24 19:28:50.067330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.957 [2024-07-24 19:28:50.067346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.957 qpair failed and we were unable to recover it. 00:28:03.957 [2024-07-24 19:28:50.067504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.957 [2024-07-24 19:28:50.067519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.957 qpair failed and we were unable to recover it. 00:28:03.957 [2024-07-24 19:28:50.067725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.957 [2024-07-24 19:28:50.067740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.957 qpair failed and we were unable to recover it. 00:28:03.957 [2024-07-24 19:28:50.067900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.957 [2024-07-24 19:28:50.067913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.957 qpair failed and we were unable to recover it. 00:28:03.957 [2024-07-24 19:28:50.068066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.957 [2024-07-24 19:28:50.068080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.957 qpair failed and we were unable to recover it. 00:28:03.957 [2024-07-24 19:28:50.068386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.957 [2024-07-24 19:28:50.068401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.957 qpair failed and we were unable to recover it. 00:28:03.957 [2024-07-24 19:28:50.068631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.957 [2024-07-24 19:28:50.068645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.957 qpair failed and we were unable to recover it. 00:28:03.957 [2024-07-24 19:28:50.068954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.957 [2024-07-24 19:28:50.068968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.957 qpair failed and we were unable to recover it. 
00:28:03.957 [2024-07-24 19:28:50.069125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.957 [2024-07-24 19:28:50.069139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.957 qpair failed and we were unable to recover it. 00:28:03.957 [2024-07-24 19:28:50.069301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.957 [2024-07-24 19:28:50.069315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.957 qpair failed and we were unable to recover it. 00:28:03.957 [2024-07-24 19:28:50.069441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.957 [2024-07-24 19:28:50.069454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.957 qpair failed and we were unable to recover it. 00:28:03.957 [2024-07-24 19:28:50.069670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.957 [2024-07-24 19:28:50.069684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.957 qpair failed and we were unable to recover it. 00:28:03.957 [2024-07-24 19:28:50.069842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.957 [2024-07-24 19:28:50.069857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.957 qpair failed and we were unable to recover it. 00:28:03.957 [2024-07-24 19:28:50.070101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.957 [2024-07-24 19:28:50.070114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.957 qpair failed and we were unable to recover it. 00:28:03.957 [2024-07-24 19:28:50.070259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.957 [2024-07-24 19:28:50.070273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.957 qpair failed and we were unable to recover it. 00:28:03.957 [2024-07-24 19:28:50.070444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.957 [2024-07-24 19:28:50.070459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.957 qpair failed and we were unable to recover it. 00:28:03.957 [2024-07-24 19:28:50.070621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.957 [2024-07-24 19:28:50.070635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.957 qpair failed and we were unable to recover it. 00:28:03.957 [2024-07-24 19:28:50.070950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.957 [2024-07-24 19:28:50.070964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.957 qpair failed and we were unable to recover it. 
00:28:03.957 [2024-07-24 19:28:50.071196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.957 [2024-07-24 19:28:50.071210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:03.957 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111 / sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats roughly 200 more times between 19:28:50.071 and 19:28:50.124 ...]
00:28:03.964 [2024-07-24 19:28:50.124069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.964 [2024-07-24 19:28:50.124133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420
00:28:03.964 qpair failed and we were unable to recover it.
00:28:03.964 [2024-07-24 19:28:50.124463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.964 [2024-07-24 19:28:50.124498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.964 qpair failed and we were unable to recover it. 00:28:03.964 [2024-07-24 19:28:50.124736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.964 [2024-07-24 19:28:50.124754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.964 qpair failed and we were unable to recover it. 00:28:03.964 [2024-07-24 19:28:50.125065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.964 [2024-07-24 19:28:50.125082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.964 qpair failed and we were unable to recover it. 00:28:03.964 [2024-07-24 19:28:50.125369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.964 [2024-07-24 19:28:50.125387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.964 qpair failed and we were unable to recover it. 00:28:03.964 [2024-07-24 19:28:50.125560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.964 [2024-07-24 19:28:50.125602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.964 qpair failed and we were unable to recover it. 00:28:03.964 [2024-07-24 19:28:50.125965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.964 [2024-07-24 19:28:50.126006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.964 qpair failed and we were unable to recover it. 00:28:03.964 [2024-07-24 19:28:50.126379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.964 [2024-07-24 19:28:50.126419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.964 qpair failed and we were unable to recover it. 00:28:03.964 [2024-07-24 19:28:50.126776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.964 [2024-07-24 19:28:50.126794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.964 qpair failed and we were unable to recover it. 00:28:03.964 [2024-07-24 19:28:50.127063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.964 [2024-07-24 19:28:50.127081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.964 qpair failed and we were unable to recover it. 00:28:03.964 [2024-07-24 19:28:50.127398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.964 [2024-07-24 19:28:50.127416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.964 qpair failed and we were unable to recover it. 
00:28:03.964 [2024-07-24 19:28:50.127616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.964 [2024-07-24 19:28:50.127634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.964 qpair failed and we were unable to recover it. 00:28:03.964 [2024-07-24 19:28:50.127889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.964 [2024-07-24 19:28:50.127907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.964 qpair failed and we were unable to recover it. 00:28:03.964 [2024-07-24 19:28:50.128145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.964 [2024-07-24 19:28:50.128163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.964 qpair failed and we were unable to recover it. 00:28:03.964 [2024-07-24 19:28:50.128468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.964 [2024-07-24 19:28:50.128486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:03.964 qpair failed and we were unable to recover it. 00:28:03.964 [2024-07-24 19:28:50.128844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.964 [2024-07-24 19:28:50.128859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.964 qpair failed and we were unable to recover it. 00:28:03.964 [2024-07-24 19:28:50.129100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.964 [2024-07-24 19:28:50.129140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.964 qpair failed and we were unable to recover it. 00:28:03.964 [2024-07-24 19:28:50.129451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.964 [2024-07-24 19:28:50.129491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.964 qpair failed and we were unable to recover it. 00:28:03.964 [2024-07-24 19:28:50.129863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.964 [2024-07-24 19:28:50.129904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.964 qpair failed and we were unable to recover it. 00:28:03.964 [2024-07-24 19:28:50.130189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.964 [2024-07-24 19:28:50.130230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.964 qpair failed and we were unable to recover it. 00:28:03.964 [2024-07-24 19:28:50.130550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.964 [2024-07-24 19:28:50.130591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.964 qpair failed and we were unable to recover it. 
00:28:03.964 [2024-07-24 19:28:50.130930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.964 [2024-07-24 19:28:50.130944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.964 qpair failed and we were unable to recover it. 00:28:03.964 [2024-07-24 19:28:50.131168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.964 [2024-07-24 19:28:50.131182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.964 qpair failed and we were unable to recover it. 00:28:03.964 [2024-07-24 19:28:50.131394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.964 [2024-07-24 19:28:50.131407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.964 qpair failed and we were unable to recover it. 00:28:03.964 [2024-07-24 19:28:50.131638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.964 [2024-07-24 19:28:50.131678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.964 qpair failed and we were unable to recover it. 00:28:03.964 [2024-07-24 19:28:50.132009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.964 [2024-07-24 19:28:50.132050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.964 qpair failed and we were unable to recover it. 00:28:03.964 [2024-07-24 19:28:50.132428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.964 [2024-07-24 19:28:50.132441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.964 qpair failed and we were unable to recover it. 00:28:03.964 [2024-07-24 19:28:50.132686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.964 [2024-07-24 19:28:50.132700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.964 qpair failed and we were unable to recover it. 00:28:03.964 [2024-07-24 19:28:50.132933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.964 [2024-07-24 19:28:50.132947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.964 qpair failed and we were unable to recover it. 00:28:03.964 [2024-07-24 19:28:50.133175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.964 [2024-07-24 19:28:50.133188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.964 qpair failed and we were unable to recover it. 00:28:03.964 [2024-07-24 19:28:50.133363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.964 [2024-07-24 19:28:50.133376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.964 qpair failed and we were unable to recover it. 
00:28:03.964 [2024-07-24 19:28:50.133520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.965 [2024-07-24 19:28:50.133534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.965 qpair failed and we were unable to recover it. 00:28:03.965 [2024-07-24 19:28:50.133823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.965 [2024-07-24 19:28:50.133864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.965 qpair failed and we were unable to recover it. 00:28:03.965 [2024-07-24 19:28:50.134153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.965 [2024-07-24 19:28:50.134193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.965 qpair failed and we were unable to recover it. 00:28:03.965 [2024-07-24 19:28:50.134557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.965 [2024-07-24 19:28:50.134597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.965 qpair failed and we were unable to recover it. 00:28:03.965 [2024-07-24 19:28:50.134954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.965 [2024-07-24 19:28:50.134968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.965 qpair failed and we were unable to recover it. 00:28:03.965 [2024-07-24 19:28:50.135193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.965 [2024-07-24 19:28:50.135206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.965 qpair failed and we were unable to recover it. 00:28:03.965 [2024-07-24 19:28:50.135465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.965 [2024-07-24 19:28:50.135478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.965 qpair failed and we were unable to recover it. 00:28:03.965 [2024-07-24 19:28:50.135760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.965 [2024-07-24 19:28:50.135800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.965 qpair failed and we were unable to recover it. 00:28:03.965 [2024-07-24 19:28:50.136092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.965 [2024-07-24 19:28:50.136132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.965 qpair failed and we were unable to recover it. 00:28:03.965 [2024-07-24 19:28:50.136471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.965 [2024-07-24 19:28:50.136513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.965 qpair failed and we were unable to recover it. 
00:28:03.965 [2024-07-24 19:28:50.136805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.965 [2024-07-24 19:28:50.136819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.965 qpair failed and we were unable to recover it. 00:28:03.965 [2024-07-24 19:28:50.137057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.965 [2024-07-24 19:28:50.137071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.965 qpair failed and we were unable to recover it. 00:28:03.965 [2024-07-24 19:28:50.137349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.965 [2024-07-24 19:28:50.137362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.965 qpair failed and we were unable to recover it. 00:28:03.965 [2024-07-24 19:28:50.137670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.965 [2024-07-24 19:28:50.137711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.965 qpair failed and we were unable to recover it. 00:28:03.965 [2024-07-24 19:28:50.138030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.965 [2024-07-24 19:28:50.138071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.965 qpair failed and we were unable to recover it. 00:28:03.965 [2024-07-24 19:28:50.138429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.965 [2024-07-24 19:28:50.138459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.965 qpair failed and we were unable to recover it. 00:28:03.965 [2024-07-24 19:28:50.138634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.965 [2024-07-24 19:28:50.138647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.965 qpair failed and we were unable to recover it. 00:28:03.965 [2024-07-24 19:28:50.138961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.965 [2024-07-24 19:28:50.138975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.965 qpair failed and we were unable to recover it. 00:28:03.965 [2024-07-24 19:28:50.139275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.965 [2024-07-24 19:28:50.139288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.965 qpair failed and we were unable to recover it. 00:28:03.965 [2024-07-24 19:28:50.139582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.965 [2024-07-24 19:28:50.139596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.965 qpair failed and we were unable to recover it. 
00:28:03.965 [2024-07-24 19:28:50.139916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.965 [2024-07-24 19:28:50.139957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.965 qpair failed and we were unable to recover it. 00:28:03.965 [2024-07-24 19:28:50.140201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.965 [2024-07-24 19:28:50.140240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.965 qpair failed and we were unable to recover it. 00:28:03.965 [2024-07-24 19:28:50.140536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.965 [2024-07-24 19:28:50.140577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.965 qpair failed and we were unable to recover it. 00:28:03.965 [2024-07-24 19:28:50.140909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.965 [2024-07-24 19:28:50.140951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.965 qpair failed and we were unable to recover it. 00:28:03.965 [2024-07-24 19:28:50.141314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.965 [2024-07-24 19:28:50.141355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.965 qpair failed and we were unable to recover it. 00:28:03.965 [2024-07-24 19:28:50.141735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.965 [2024-07-24 19:28:50.141776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.965 qpair failed and we were unable to recover it. 00:28:03.965 [2024-07-24 19:28:50.142041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.965 [2024-07-24 19:28:50.142055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.965 qpair failed and we were unable to recover it. 00:28:03.965 [2024-07-24 19:28:50.142347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.965 [2024-07-24 19:28:50.142387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.965 qpair failed and we were unable to recover it. 00:28:03.965 [2024-07-24 19:28:50.142747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.965 [2024-07-24 19:28:50.142789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.965 qpair failed and we were unable to recover it. 00:28:03.965 [2024-07-24 19:28:50.143170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.965 [2024-07-24 19:28:50.143211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.965 qpair failed and we were unable to recover it. 
00:28:03.965 [2024-07-24 19:28:50.143582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.965 [2024-07-24 19:28:50.143596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.965 qpair failed and we were unable to recover it. 00:28:03.965 [2024-07-24 19:28:50.143862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.965 [2024-07-24 19:28:50.143875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.965 qpair failed and we were unable to recover it. 00:28:03.965 [2024-07-24 19:28:50.144173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.965 [2024-07-24 19:28:50.144213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.965 qpair failed and we were unable to recover it. 00:28:03.965 [2024-07-24 19:28:50.144573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.965 [2024-07-24 19:28:50.144613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.965 qpair failed and we were unable to recover it. 00:28:03.965 [2024-07-24 19:28:50.144899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.965 [2024-07-24 19:28:50.144913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.965 qpair failed and we were unable to recover it. 00:28:03.965 [2024-07-24 19:28:50.145216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.966 [2024-07-24 19:28:50.145229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.966 qpair failed and we were unable to recover it. 00:28:03.966 [2024-07-24 19:28:50.145482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.966 [2024-07-24 19:28:50.145497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.966 qpair failed and we were unable to recover it. 00:28:03.966 [2024-07-24 19:28:50.145787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.966 [2024-07-24 19:28:50.145801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.966 qpair failed and we were unable to recover it. 00:28:03.966 [2024-07-24 19:28:50.146028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.966 [2024-07-24 19:28:50.146041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.966 qpair failed and we were unable to recover it. 00:28:03.966 [2024-07-24 19:28:50.146245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.966 [2024-07-24 19:28:50.146259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.966 qpair failed and we were unable to recover it. 
00:28:03.966 [2024-07-24 19:28:50.146509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.966 [2024-07-24 19:28:50.146554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.966 qpair failed and we were unable to recover it. 00:28:03.966 [2024-07-24 19:28:50.146918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.966 [2024-07-24 19:28:50.146958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.966 qpair failed and we were unable to recover it. 00:28:03.966 [2024-07-24 19:28:50.147301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.966 [2024-07-24 19:28:50.147341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.966 qpair failed and we were unable to recover it. 00:28:03.966 [2024-07-24 19:28:50.147708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.966 [2024-07-24 19:28:50.147755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.966 qpair failed and we were unable to recover it. 00:28:03.966 [2024-07-24 19:28:50.148002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.966 [2024-07-24 19:28:50.148015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.966 qpair failed and we were unable to recover it. 00:28:03.966 [2024-07-24 19:28:50.148266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.966 [2024-07-24 19:28:50.148279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.966 qpair failed and we were unable to recover it. 00:28:03.966 [2024-07-24 19:28:50.148588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.966 [2024-07-24 19:28:50.148601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.966 qpair failed and we were unable to recover it. 00:28:03.966 [2024-07-24 19:28:50.148827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.966 [2024-07-24 19:28:50.148840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.966 qpair failed and we were unable to recover it. 00:28:03.966 [2024-07-24 19:28:50.149076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.966 [2024-07-24 19:28:50.149117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.966 qpair failed and we were unable to recover it. 00:28:03.966 [2024-07-24 19:28:50.149364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.966 [2024-07-24 19:28:50.149404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.966 qpair failed and we were unable to recover it. 
00:28:03.966 [2024-07-24 19:28:50.149712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.966 [2024-07-24 19:28:50.149731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.966 qpair failed and we were unable to recover it. 00:28:03.966 [2024-07-24 19:28:50.149948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.966 [2024-07-24 19:28:50.149962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.966 qpair failed and we were unable to recover it. 00:28:03.966 [2024-07-24 19:28:50.150275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.966 [2024-07-24 19:28:50.150288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.966 qpair failed and we were unable to recover it. 00:28:03.966 [2024-07-24 19:28:50.150616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.966 [2024-07-24 19:28:50.150629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.966 qpair failed and we were unable to recover it. 00:28:03.966 [2024-07-24 19:28:50.150975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.966 [2024-07-24 19:28:50.150988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.966 qpair failed and we were unable to recover it. 00:28:03.966 [2024-07-24 19:28:50.151204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.966 [2024-07-24 19:28:50.151217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.966 qpair failed and we were unable to recover it. 00:28:03.966 [2024-07-24 19:28:50.151455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.966 [2024-07-24 19:28:50.151468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.966 qpair failed and we were unable to recover it. 00:28:03.966 [2024-07-24 19:28:50.151708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.966 [2024-07-24 19:28:50.151724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.966 qpair failed and we were unable to recover it. 00:28:03.966 [2024-07-24 19:28:50.152000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.966 [2024-07-24 19:28:50.152013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.966 qpair failed and we were unable to recover it. 00:28:03.966 [2024-07-24 19:28:50.152241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.966 [2024-07-24 19:28:50.152254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.966 qpair failed and we were unable to recover it. 
00:28:03.966 [2024-07-24 19:28:50.152495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.966 [2024-07-24 19:28:50.152508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.966 qpair failed and we were unable to recover it. 00:28:03.966 [2024-07-24 19:28:50.152663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.966 [2024-07-24 19:28:50.152676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.966 qpair failed and we were unable to recover it. 00:28:03.966 [2024-07-24 19:28:50.152922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.966 [2024-07-24 19:28:50.152935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.966 qpair failed and we were unable to recover it. 00:28:03.966 [2024-07-24 19:28:50.153244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.966 [2024-07-24 19:28:50.153284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.966 qpair failed and we were unable to recover it. 00:28:03.966 [2024-07-24 19:28:50.153603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.966 [2024-07-24 19:28:50.153643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.966 qpair failed and we were unable to recover it. 00:28:03.966 [2024-07-24 19:28:50.154052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.966 [2024-07-24 19:28:50.154094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.966 qpair failed and we were unable to recover it. 00:28:03.966 [2024-07-24 19:28:50.154458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.966 [2024-07-24 19:28:50.154497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.966 qpair failed and we were unable to recover it. 00:28:03.966 [2024-07-24 19:28:50.154855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.966 [2024-07-24 19:28:50.154869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.966 qpair failed and we were unable to recover it. 00:28:03.966 [2024-07-24 19:28:50.155170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.966 [2024-07-24 19:28:50.155183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.966 qpair failed and we were unable to recover it. 00:28:03.966 [2024-07-24 19:28:50.155401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.966 [2024-07-24 19:28:50.155414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.966 qpair failed and we were unable to recover it. 
00:28:03.966 [2024-07-24 19:28:50.155677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.966 [2024-07-24 19:28:50.155726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.967 qpair failed and we were unable to recover it. 00:28:03.967 [2024-07-24 19:28:50.156090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.967 [2024-07-24 19:28:50.156131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.967 qpair failed and we were unable to recover it. 00:28:03.967 [2024-07-24 19:28:50.156493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.967 [2024-07-24 19:28:50.156534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.967 qpair failed and we were unable to recover it. 00:28:03.967 [2024-07-24 19:28:50.156866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.967 [2024-07-24 19:28:50.156879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.967 qpair failed and we were unable to recover it. 00:28:03.967 [2024-07-24 19:28:50.157038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.967 [2024-07-24 19:28:50.157051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.967 qpair failed and we were unable to recover it. 00:28:03.967 [2024-07-24 19:28:50.157316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.967 [2024-07-24 19:28:50.157329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.967 qpair failed and we were unable to recover it. 00:28:03.967 [2024-07-24 19:28:50.157584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.967 [2024-07-24 19:28:50.157641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.967 qpair failed and we were unable to recover it. 00:28:03.967 [2024-07-24 19:28:50.158006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.967 [2024-07-24 19:28:50.158048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.967 qpair failed and we were unable to recover it. 00:28:03.967 [2024-07-24 19:28:50.158407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.967 [2024-07-24 19:28:50.158447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.967 qpair failed and we were unable to recover it. 00:28:03.967 [2024-07-24 19:28:50.158733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.967 [2024-07-24 19:28:50.158747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.967 qpair failed and we were unable to recover it. 
00:28:03.967 [2024-07-24 19:28:50.159024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.967 [2024-07-24 19:28:50.159037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.967 qpair failed and we were unable to recover it. 00:28:03.967 [2024-07-24 19:28:50.159262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.967 [2024-07-24 19:28:50.159275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:03.967 qpair failed and we were unable to recover it. 00:28:04.241 [2024-07-24 19:28:50.159573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.241 [2024-07-24 19:28:50.159588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.241 qpair failed and we were unable to recover it. 00:28:04.241 [2024-07-24 19:28:50.159751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.241 [2024-07-24 19:28:50.159764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.241 qpair failed and we were unable to recover it. 00:28:04.241 [2024-07-24 19:28:50.160096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.241 [2024-07-24 19:28:50.160109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.241 qpair failed and we were unable to recover it. 00:28:04.241 [2024-07-24 19:28:50.160356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.241 [2024-07-24 19:28:50.160395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.241 qpair failed and we were unable to recover it. 00:28:04.241 [2024-07-24 19:28:50.160686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.241 [2024-07-24 19:28:50.160768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.241 qpair failed and we were unable to recover it. 00:28:04.241 [2024-07-24 19:28:50.160992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.241 [2024-07-24 19:28:50.161033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.241 qpair failed and we were unable to recover it. 00:28:04.241 [2024-07-24 19:28:50.161385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.241 [2024-07-24 19:28:50.161425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.241 qpair failed and we were unable to recover it. 00:28:04.241 [2024-07-24 19:28:50.161782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.241 [2024-07-24 19:28:50.161796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.241 qpair failed and we were unable to recover it. 
00:28:04.241 [2024-07-24 19:28:50.161956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.241 [2024-07-24 19:28:50.161970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.241 qpair failed and we were unable to recover it. 00:28:04.241 [2024-07-24 19:28:50.162196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.241 [2024-07-24 19:28:50.162210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.241 qpair failed and we were unable to recover it. 00:28:04.241 [2024-07-24 19:28:50.162490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.241 [2024-07-24 19:28:50.162530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.241 qpair failed and we were unable to recover it. 00:28:04.241 [2024-07-24 19:28:50.162843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.241 [2024-07-24 19:28:50.162885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.241 qpair failed and we were unable to recover it. 00:28:04.241 [2024-07-24 19:28:50.163226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.241 [2024-07-24 19:28:50.163266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.241 qpair failed and we were unable to recover it. 00:28:04.241 [2024-07-24 19:28:50.163577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.241 [2024-07-24 19:28:50.163618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.241 qpair failed and we were unable to recover it. 00:28:04.241 [2024-07-24 19:28:50.163989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.241 [2024-07-24 19:28:50.164002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.241 qpair failed and we were unable to recover it. 00:28:04.241 [2024-07-24 19:28:50.164306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.241 [2024-07-24 19:28:50.164319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.241 qpair failed and we were unable to recover it. 00:28:04.241 [2024-07-24 19:28:50.164538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.241 [2024-07-24 19:28:50.164551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.241 qpair failed and we were unable to recover it. 00:28:04.241 [2024-07-24 19:28:50.164830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.241 [2024-07-24 19:28:50.164843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.241 qpair failed and we were unable to recover it. 
00:28:04.241 [2024-07-24 19:28:50.165057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.241 [2024-07-24 19:28:50.165097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.241 qpair failed and we were unable to recover it.
00:28:04.247 (the connect()/qpair error pair above repeats with only timestamps varying — approximately 210 occurrences between 19:28:50.165 and 19:28:50.233, always for tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420, each ending in "qpair failed and we were unable to recover it.")
00:28:04.247 [2024-07-24 19:28:50.233712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.247 [2024-07-24 19:28:50.233730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.247 qpair failed and we were unable to recover it. 00:28:04.247 [2024-07-24 19:28:50.234059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.247 [2024-07-24 19:28:50.234099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.247 qpair failed and we were unable to recover it. 00:28:04.247 [2024-07-24 19:28:50.234324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.248 [2024-07-24 19:28:50.234364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.248 qpair failed and we were unable to recover it. 00:28:04.248 [2024-07-24 19:28:50.234760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.248 [2024-07-24 19:28:50.234801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.248 qpair failed and we were unable to recover it. 00:28:04.248 [2024-07-24 19:28:50.235157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.248 [2024-07-24 19:28:50.235192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.248 qpair failed and we were unable to recover it. 00:28:04.248 [2024-07-24 19:28:50.235483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.248 [2024-07-24 19:28:50.235523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.248 qpair failed and we were unable to recover it. 00:28:04.248 [2024-07-24 19:28:50.235785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.248 [2024-07-24 19:28:50.235801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.248 qpair failed and we were unable to recover it. 00:28:04.248 [2024-07-24 19:28:50.236089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.248 [2024-07-24 19:28:50.236129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.248 qpair failed and we were unable to recover it. 00:28:04.248 [2024-07-24 19:28:50.236350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.248 [2024-07-24 19:28:50.236389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.248 qpair failed and we were unable to recover it. 00:28:04.248 [2024-07-24 19:28:50.236663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.248 [2024-07-24 19:28:50.236703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.248 qpair failed and we were unable to recover it. 
00:28:04.248 [2024-07-24 19:28:50.237061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.248 [2024-07-24 19:28:50.237101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.248 qpair failed and we were unable to recover it. 00:28:04.248 [2024-07-24 19:28:50.237369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.248 [2024-07-24 19:28:50.237409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.248 qpair failed and we were unable to recover it. 00:28:04.248 [2024-07-24 19:28:50.237724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.248 [2024-07-24 19:28:50.237766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.248 qpair failed and we were unable to recover it. 00:28:04.248 [2024-07-24 19:28:50.238137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.248 [2024-07-24 19:28:50.238178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.248 qpair failed and we were unable to recover it. 00:28:04.248 [2024-07-24 19:28:50.238538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.248 [2024-07-24 19:28:50.238579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.248 qpair failed and we were unable to recover it. 00:28:04.248 [2024-07-24 19:28:50.238857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.248 [2024-07-24 19:28:50.238898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.248 qpair failed and we were unable to recover it. 00:28:04.248 [2024-07-24 19:28:50.239263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.248 [2024-07-24 19:28:50.239275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.248 qpair failed and we were unable to recover it. 00:28:04.248 [2024-07-24 19:28:50.239599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.248 [2024-07-24 19:28:50.239639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.248 qpair failed and we were unable to recover it. 00:28:04.248 [2024-07-24 19:28:50.240018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.248 [2024-07-24 19:28:50.240059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.248 qpair failed and we were unable to recover it. 00:28:04.248 [2024-07-24 19:28:50.240328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.248 [2024-07-24 19:28:50.240342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.248 qpair failed and we were unable to recover it. 
00:28:04.248 [2024-07-24 19:28:50.240585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.248 [2024-07-24 19:28:50.240598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.248 qpair failed and we were unable to recover it. 00:28:04.248 [2024-07-24 19:28:50.240851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.248 [2024-07-24 19:28:50.240886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.248 qpair failed and we were unable to recover it. 00:28:04.248 [2024-07-24 19:28:50.241137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.248 [2024-07-24 19:28:50.241178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.248 qpair failed and we were unable to recover it. 00:28:04.248 [2024-07-24 19:28:50.241553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.248 [2024-07-24 19:28:50.241594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.248 qpair failed and we were unable to recover it. 00:28:04.248 [2024-07-24 19:28:50.241952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.248 [2024-07-24 19:28:50.241985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.248 qpair failed and we were unable to recover it. 00:28:04.248 [2024-07-24 19:28:50.242260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.248 [2024-07-24 19:28:50.242272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.248 qpair failed and we were unable to recover it. 00:28:04.248 [2024-07-24 19:28:50.242504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.248 [2024-07-24 19:28:50.242517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.248 qpair failed and we were unable to recover it. 00:28:04.248 [2024-07-24 19:28:50.242824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.248 [2024-07-24 19:28:50.242865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.248 qpair failed and we were unable to recover it. 00:28:04.248 [2024-07-24 19:28:50.243150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.248 [2024-07-24 19:28:50.243191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.248 qpair failed and we were unable to recover it. 00:28:04.248 [2024-07-24 19:28:50.243533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.248 [2024-07-24 19:28:50.243574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.248 qpair failed and we were unable to recover it. 
00:28:04.248 [2024-07-24 19:28:50.243880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.248 [2024-07-24 19:28:50.243893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.248 qpair failed and we were unable to recover it. 00:28:04.248 [2024-07-24 19:28:50.244223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.248 [2024-07-24 19:28:50.244263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.248 qpair failed and we were unable to recover it. 00:28:04.248 [2024-07-24 19:28:50.244625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.248 [2024-07-24 19:28:50.244666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.248 qpair failed and we were unable to recover it. 00:28:04.248 [2024-07-24 19:28:50.245046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.248 [2024-07-24 19:28:50.245088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.248 qpair failed and we were unable to recover it. 00:28:04.248 [2024-07-24 19:28:50.245372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.248 [2024-07-24 19:28:50.245412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.248 qpair failed and we were unable to recover it. 00:28:04.248 [2024-07-24 19:28:50.245775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.248 [2024-07-24 19:28:50.245816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.248 qpair failed and we were unable to recover it. 00:28:04.248 [2024-07-24 19:28:50.246159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.248 [2024-07-24 19:28:50.246199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.248 qpair failed and we were unable to recover it. 00:28:04.248 [2024-07-24 19:28:50.246476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.248 [2024-07-24 19:28:50.246516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.248 qpair failed and we were unable to recover it. 00:28:04.249 [2024-07-24 19:28:50.246877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.249 [2024-07-24 19:28:50.246918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.249 qpair failed and we were unable to recover it. 00:28:04.249 [2024-07-24 19:28:50.247234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.249 [2024-07-24 19:28:50.247274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.249 qpair failed and we were unable to recover it. 
00:28:04.249 [2024-07-24 19:28:50.247631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.249 [2024-07-24 19:28:50.247672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.249 qpair failed and we were unable to recover it. 00:28:04.249 [2024-07-24 19:28:50.247986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.249 [2024-07-24 19:28:50.248028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.249 qpair failed and we were unable to recover it. 00:28:04.249 [2024-07-24 19:28:50.248414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.249 [2024-07-24 19:28:50.248454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.249 qpair failed and we were unable to recover it. 00:28:04.249 [2024-07-24 19:28:50.248816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.249 [2024-07-24 19:28:50.248858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.249 qpair failed and we were unable to recover it. 00:28:04.249 [2024-07-24 19:28:50.249084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.249 [2024-07-24 19:28:50.249125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.249 qpair failed and we were unable to recover it. 00:28:04.249 [2024-07-24 19:28:50.249515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.249 [2024-07-24 19:28:50.249555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.249 qpair failed and we were unable to recover it. 00:28:04.249 [2024-07-24 19:28:50.249906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.249 [2024-07-24 19:28:50.249953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.249 qpair failed and we were unable to recover it. 00:28:04.249 [2024-07-24 19:28:50.250256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.249 [2024-07-24 19:28:50.250296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.249 qpair failed and we were unable to recover it. 00:28:04.249 [2024-07-24 19:28:50.250586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.249 [2024-07-24 19:28:50.250626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.249 qpair failed and we were unable to recover it. 00:28:04.249 [2024-07-24 19:28:50.250988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.249 [2024-07-24 19:28:50.251030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.249 qpair failed and we were unable to recover it. 
00:28:04.249 [2024-07-24 19:28:50.251302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.249 [2024-07-24 19:28:50.251343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.249 qpair failed and we were unable to recover it. 00:28:04.249 [2024-07-24 19:28:50.251644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.249 [2024-07-24 19:28:50.251685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.249 qpair failed and we were unable to recover it. 00:28:04.249 [2024-07-24 19:28:50.252059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.249 [2024-07-24 19:28:50.252099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.249 qpair failed and we were unable to recover it. 00:28:04.249 [2024-07-24 19:28:50.252414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.249 [2024-07-24 19:28:50.252454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.249 qpair failed and we were unable to recover it. 00:28:04.249 [2024-07-24 19:28:50.252814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.249 [2024-07-24 19:28:50.252857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.249 qpair failed and we were unable to recover it. 00:28:04.249 [2024-07-24 19:28:50.253224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.249 [2024-07-24 19:28:50.253237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.249 qpair failed and we were unable to recover it. 00:28:04.249 [2024-07-24 19:28:50.253512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.249 [2024-07-24 19:28:50.253524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.249 qpair failed and we were unable to recover it. 00:28:04.249 [2024-07-24 19:28:50.253751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.249 [2024-07-24 19:28:50.253764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.249 qpair failed and we were unable to recover it. 00:28:04.249 [2024-07-24 19:28:50.254062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.249 [2024-07-24 19:28:50.254102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.249 qpair failed and we were unable to recover it. 00:28:04.249 [2024-07-24 19:28:50.254458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.249 [2024-07-24 19:28:50.254498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.249 qpair failed and we were unable to recover it. 
00:28:04.249 [2024-07-24 19:28:50.254863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.249 [2024-07-24 19:28:50.254904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.249 qpair failed and we were unable to recover it. 00:28:04.249 [2024-07-24 19:28:50.255139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.249 [2024-07-24 19:28:50.255179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.249 qpair failed and we were unable to recover it. 00:28:04.249 [2024-07-24 19:28:50.255521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.249 [2024-07-24 19:28:50.255562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.249 qpair failed and we were unable to recover it. 00:28:04.249 [2024-07-24 19:28:50.255847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.249 [2024-07-24 19:28:50.255860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.249 qpair failed and we were unable to recover it. 00:28:04.249 [2024-07-24 19:28:50.256080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.249 [2024-07-24 19:28:50.256121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.249 qpair failed and we were unable to recover it. 00:28:04.249 [2024-07-24 19:28:50.256414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.249 [2024-07-24 19:28:50.256455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.249 qpair failed and we were unable to recover it. 00:28:04.249 [2024-07-24 19:28:50.256729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.249 [2024-07-24 19:28:50.256742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.249 qpair failed and we were unable to recover it. 00:28:04.249 [2024-07-24 19:28:50.256951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.249 [2024-07-24 19:28:50.256964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.249 qpair failed and we were unable to recover it. 00:28:04.249 [2024-07-24 19:28:50.257173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.249 [2024-07-24 19:28:50.257187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.249 qpair failed and we were unable to recover it. 00:28:04.249 [2024-07-24 19:28:50.257418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.249 [2024-07-24 19:28:50.257432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.249 qpair failed and we were unable to recover it. 
00:28:04.249 [2024-07-24 19:28:50.257655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.249 [2024-07-24 19:28:50.257696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.249 qpair failed and we were unable to recover it. 00:28:04.249 [2024-07-24 19:28:50.257945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.249 [2024-07-24 19:28:50.257985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.249 qpair failed and we were unable to recover it. 00:28:04.249 [2024-07-24 19:28:50.258250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.249 [2024-07-24 19:28:50.258263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.249 qpair failed and we were unable to recover it. 00:28:04.250 [2024-07-24 19:28:50.258588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.250 [2024-07-24 19:28:50.258629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.250 qpair failed and we were unable to recover it. 00:28:04.250 [2024-07-24 19:28:50.258918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.250 [2024-07-24 19:28:50.258959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.250 qpair failed and we were unable to recover it. 00:28:04.250 [2024-07-24 19:28:50.259273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.250 [2024-07-24 19:28:50.259314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.250 qpair failed and we were unable to recover it. 00:28:04.250 [2024-07-24 19:28:50.259685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.250 [2024-07-24 19:28:50.259735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.250 qpair failed and we were unable to recover it. 00:28:04.250 [2024-07-24 19:28:50.260080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.250 [2024-07-24 19:28:50.260120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.250 qpair failed and we were unable to recover it. 00:28:04.250 [2024-07-24 19:28:50.260485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.250 [2024-07-24 19:28:50.260526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.250 qpair failed and we were unable to recover it. 00:28:04.250 [2024-07-24 19:28:50.260790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.250 [2024-07-24 19:28:50.260804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.250 qpair failed and we were unable to recover it. 
00:28:04.250 [2024-07-24 19:28:50.261085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.250 [2024-07-24 19:28:50.261125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.250 qpair failed and we were unable to recover it. 00:28:04.250 [2024-07-24 19:28:50.261395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.250 [2024-07-24 19:28:50.261436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.250 qpair failed and we were unable to recover it. 00:28:04.250 [2024-07-24 19:28:50.261789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.250 [2024-07-24 19:28:50.261847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.250 qpair failed and we were unable to recover it. 00:28:04.250 [2024-07-24 19:28:50.262134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.250 [2024-07-24 19:28:50.262174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.250 qpair failed and we were unable to recover it. 00:28:04.250 [2024-07-24 19:28:50.262490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.250 [2024-07-24 19:28:50.262531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.250 qpair failed and we were unable to recover it. 00:28:04.250 [2024-07-24 19:28:50.262833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.250 [2024-07-24 19:28:50.262874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.250 qpair failed and we were unable to recover it. 00:28:04.250 [2024-07-24 19:28:50.263213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.250 [2024-07-24 19:28:50.263265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.250 qpair failed and we were unable to recover it. 00:28:04.250 [2024-07-24 19:28:50.263600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.250 [2024-07-24 19:28:50.263641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.250 qpair failed and we were unable to recover it. 00:28:04.250 [2024-07-24 19:28:50.264047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.250 [2024-07-24 19:28:50.264088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.250 qpair failed and we were unable to recover it. 00:28:04.250 [2024-07-24 19:28:50.264398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.250 [2024-07-24 19:28:50.264439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.250 qpair failed and we were unable to recover it. 
00:28:04.250 [2024-07-24 19:28:50.264749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.250 [2024-07-24 19:28:50.264790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.250 qpair failed and we were unable to recover it. 00:28:04.250 [2024-07-24 19:28:50.265031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.250 [2024-07-24 19:28:50.265071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.250 qpair failed and we were unable to recover it. 00:28:04.250 [2024-07-24 19:28:50.265407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.250 [2024-07-24 19:28:50.265447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.250 qpair failed and we were unable to recover it. 00:28:04.250 [2024-07-24 19:28:50.265743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.250 [2024-07-24 19:28:50.265784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.250 qpair failed and we were unable to recover it. 00:28:04.250 [2024-07-24 19:28:50.266151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.250 [2024-07-24 19:28:50.266193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.250 qpair failed and we were unable to recover it. 00:28:04.250 [2024-07-24 19:28:50.266558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.250 [2024-07-24 19:28:50.266598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.250 qpair failed and we were unable to recover it. 00:28:04.250 [2024-07-24 19:28:50.266867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.250 [2024-07-24 19:28:50.266880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.250 qpair failed and we were unable to recover it. 00:28:04.250 [2024-07-24 19:28:50.267174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.250 [2024-07-24 19:28:50.267209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.250 qpair failed and we were unable to recover it. 00:28:04.250 [2024-07-24 19:28:50.267425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.250 [2024-07-24 19:28:50.267465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.250 qpair failed and we were unable to recover it. 00:28:04.250 [2024-07-24 19:28:50.267740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.250 [2024-07-24 19:28:50.267785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.250 qpair failed and we were unable to recover it. 
00:28:04.250 [2024-07-24 19:28:50.267942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.250 [2024-07-24 19:28:50.267956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.250 qpair failed and we were unable to recover it. 00:28:04.250 [2024-07-24 19:28:50.268244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.250 [2024-07-24 19:28:50.268285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.250 qpair failed and we were unable to recover it. 00:28:04.250 [2024-07-24 19:28:50.268504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.250 [2024-07-24 19:28:50.268545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.250 qpair failed and we were unable to recover it. 00:28:04.250 [2024-07-24 19:28:50.268822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.250 [2024-07-24 19:28:50.268835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.250 qpair failed and we were unable to recover it. 00:28:04.250 [2024-07-24 19:28:50.269146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.250 [2024-07-24 19:28:50.269186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.250 qpair failed and we were unable to recover it. 00:28:04.250 [2024-07-24 19:28:50.269473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.250 [2024-07-24 19:28:50.269513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.250 qpair failed and we were unable to recover it. 00:28:04.250 [2024-07-24 19:28:50.269873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.250 [2024-07-24 19:28:50.269914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.250 qpair failed and we were unable to recover it. 00:28:04.250 [2024-07-24 19:28:50.270288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.250 [2024-07-24 19:28:50.270327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.250 qpair failed and we were unable to recover it. 00:28:04.250 [2024-07-24 19:28:50.270478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.251 [2024-07-24 19:28:50.270519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.251 qpair failed and we were unable to recover it. 00:28:04.251 [2024-07-24 19:28:50.270804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.251 [2024-07-24 19:28:50.270845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.251 qpair failed and we were unable to recover it. 
00:28:04.251 [2024-07-24 19:28:50.271210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.251 [2024-07-24 19:28:50.271251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.251 qpair failed and we were unable to recover it. 00:28:04.251 [2024-07-24 19:28:50.271522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.251 [2024-07-24 19:28:50.271563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.251 qpair failed and we were unable to recover it. 00:28:04.251 [2024-07-24 19:28:50.271764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.251 [2024-07-24 19:28:50.271777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.251 qpair failed and we were unable to recover it. 00:28:04.251 [2024-07-24 19:28:50.272079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.251 [2024-07-24 19:28:50.272093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.251 qpair failed and we were unable to recover it. 00:28:04.251 [2024-07-24 19:28:50.272333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.251 [2024-07-24 19:28:50.272345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.251 qpair failed and we were unable to recover it. 00:28:04.251 [2024-07-24 19:28:50.272734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.251 [2024-07-24 19:28:50.272775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.251 qpair failed and we were unable to recover it. 00:28:04.251 [2024-07-24 19:28:50.273060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.251 [2024-07-24 19:28:50.273101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.251 qpair failed and we were unable to recover it. 00:28:04.251 [2024-07-24 19:28:50.273388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.251 [2024-07-24 19:28:50.273428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.251 qpair failed and we were unable to recover it. 00:28:04.251 [2024-07-24 19:28:50.273697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.251 [2024-07-24 19:28:50.273748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.251 qpair failed and we were unable to recover it. 00:28:04.251 [2024-07-24 19:28:50.273973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.251 [2024-07-24 19:28:50.274014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.251 qpair failed and we were unable to recover it. 
00:28:04.251 [2024-07-24 19:28:50.274136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.251 [2024-07-24 19:28:50.274149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.251 qpair failed and we were unable to recover it. 00:28:04.251 [2024-07-24 19:28:50.274482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.251 [2024-07-24 19:28:50.274523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.251 qpair failed and we were unable to recover it. 00:28:04.251 [2024-07-24 19:28:50.274825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.251 [2024-07-24 19:28:50.274866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.251 qpair failed and we were unable to recover it. 00:28:04.251 [2024-07-24 19:28:50.275166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.251 [2024-07-24 19:28:50.275189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.251 qpair failed and we were unable to recover it. 00:28:04.251 [2024-07-24 19:28:50.275382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.251 [2024-07-24 19:28:50.275396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.251 qpair failed and we were unable to recover it. 00:28:04.251 [2024-07-24 19:28:50.275691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.251 [2024-07-24 19:28:50.275745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.251 qpair failed and we were unable to recover it. 00:28:04.251 [2024-07-24 19:28:50.276086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.251 [2024-07-24 19:28:50.276133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.251 qpair failed and we were unable to recover it. 00:28:04.251 [2024-07-24 19:28:50.276423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.251 [2024-07-24 19:28:50.276464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.251 qpair failed and we were unable to recover it. 00:28:04.251 [2024-07-24 19:28:50.276779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.251 [2024-07-24 19:28:50.276819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.251 qpair failed and we were unable to recover it. 00:28:04.251 [2024-07-24 19:28:50.277041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.251 [2024-07-24 19:28:50.277081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.251 qpair failed and we were unable to recover it. 
00:28:04.251 [2024-07-24 19:28:50.277448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.251 [2024-07-24 19:28:50.277489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:04.251 qpair failed and we were unable to recover it.
[... duplicate triplets elided: the same three messages -- posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeat for every reconnect attempt between 19:28:50.277 and 19:28:50.346 ...]
00:28:04.257 [2024-07-24 19:28:50.346324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.257 [2024-07-24 19:28:50.346363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:04.257 qpair failed and we were unable to recover it.
00:28:04.257 [2024-07-24 19:28:50.346585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.257 [2024-07-24 19:28:50.346624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.257 qpair failed and we were unable to recover it. 00:28:04.257 [2024-07-24 19:28:50.346914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.257 [2024-07-24 19:28:50.346955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.257 qpair failed and we were unable to recover it. 00:28:04.257 [2024-07-24 19:28:50.347234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.257 [2024-07-24 19:28:50.347248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.257 qpair failed and we were unable to recover it. 00:28:04.257 [2024-07-24 19:28:50.347523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.257 [2024-07-24 19:28:50.347536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.258 qpair failed and we were unable to recover it. 00:28:04.258 [2024-07-24 19:28:50.347632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.258 [2024-07-24 19:28:50.347683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.258 qpair failed and we were unable to recover it. 00:28:04.258 [2024-07-24 19:28:50.348015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.258 [2024-07-24 19:28:50.348056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.258 qpair failed and we were unable to recover it. 00:28:04.258 [2024-07-24 19:28:50.348284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.258 [2024-07-24 19:28:50.348324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.258 qpair failed and we were unable to recover it. 00:28:04.258 [2024-07-24 19:28:50.348620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.258 [2024-07-24 19:28:50.348661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.258 qpair failed and we were unable to recover it. 00:28:04.258 [2024-07-24 19:28:50.348960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.258 [2024-07-24 19:28:50.349001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.258 qpair failed and we were unable to recover it. 00:28:04.258 [2024-07-24 19:28:50.349289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.258 [2024-07-24 19:28:50.349329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.258 qpair failed and we were unable to recover it. 
00:28:04.258 [2024-07-24 19:28:50.349693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.258 [2024-07-24 19:28:50.349743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.258 qpair failed and we were unable to recover it. 00:28:04.258 [2024-07-24 19:28:50.349959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.258 [2024-07-24 19:28:50.349972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.258 qpair failed and we were unable to recover it. 00:28:04.258 [2024-07-24 19:28:50.350280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.258 [2024-07-24 19:28:50.350293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.258 qpair failed and we were unable to recover it. 00:28:04.258 [2024-07-24 19:28:50.350591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.258 [2024-07-24 19:28:50.350627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.258 qpair failed and we were unable to recover it. 00:28:04.258 [2024-07-24 19:28:50.350835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.258 [2024-07-24 19:28:50.350876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.258 qpair failed and we were unable to recover it. 00:28:04.258 [2024-07-24 19:28:50.351111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.258 [2024-07-24 19:28:50.351151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.258 qpair failed and we were unable to recover it. 00:28:04.258 [2024-07-24 19:28:50.351504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.258 [2024-07-24 19:28:50.351549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.258 qpair failed and we were unable to recover it. 00:28:04.258 [2024-07-24 19:28:50.351771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.258 [2024-07-24 19:28:50.351812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.258 qpair failed and we were unable to recover it. 00:28:04.258 [2024-07-24 19:28:50.352101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.258 [2024-07-24 19:28:50.352151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.258 qpair failed and we were unable to recover it. 00:28:04.258 [2024-07-24 19:28:50.352389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.258 [2024-07-24 19:28:50.352402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.258 qpair failed and we were unable to recover it. 
00:28:04.258 [2024-07-24 19:28:50.352653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.258 [2024-07-24 19:28:50.352693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.258 qpair failed and we were unable to recover it. 00:28:04.258 [2024-07-24 19:28:50.353089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.258 [2024-07-24 19:28:50.353130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.258 qpair failed and we were unable to recover it. 00:28:04.258 [2024-07-24 19:28:50.353410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.258 [2024-07-24 19:28:50.353450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.258 qpair failed and we were unable to recover it. 00:28:04.258 [2024-07-24 19:28:50.353735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.258 [2024-07-24 19:28:50.353776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.258 qpair failed and we were unable to recover it. 00:28:04.258 [2024-07-24 19:28:50.354038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.258 [2024-07-24 19:28:50.354052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.258 qpair failed and we were unable to recover it. 00:28:04.258 [2024-07-24 19:28:50.354290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.258 [2024-07-24 19:28:50.354313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.258 qpair failed and we were unable to recover it. 00:28:04.258 [2024-07-24 19:28:50.354487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.258 [2024-07-24 19:28:50.354500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.258 qpair failed and we were unable to recover it. 00:28:04.258 [2024-07-24 19:28:50.354673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.258 [2024-07-24 19:28:50.354712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.258 qpair failed and we were unable to recover it. 00:28:04.258 [2024-07-24 19:28:50.355066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.258 [2024-07-24 19:28:50.355107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.258 qpair failed and we were unable to recover it. 00:28:04.258 [2024-07-24 19:28:50.355310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.258 [2024-07-24 19:28:50.355350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.258 qpair failed and we were unable to recover it. 
00:28:04.258 [2024-07-24 19:28:50.355640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.258 [2024-07-24 19:28:50.355679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.258 qpair failed and we were unable to recover it. 00:28:04.258 [2024-07-24 19:28:50.356075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.258 [2024-07-24 19:28:50.356089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.258 qpair failed and we were unable to recover it. 00:28:04.258 [2024-07-24 19:28:50.356383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.258 [2024-07-24 19:28:50.356396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.258 qpair failed and we were unable to recover it. 00:28:04.258 [2024-07-24 19:28:50.356615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.258 [2024-07-24 19:28:50.356655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.258 qpair failed and we were unable to recover it. 00:28:04.258 [2024-07-24 19:28:50.357027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.258 [2024-07-24 19:28:50.357068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.258 qpair failed and we were unable to recover it. 00:28:04.258 [2024-07-24 19:28:50.357339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.258 [2024-07-24 19:28:50.357379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.258 qpair failed and we were unable to recover it. 00:28:04.258 [2024-07-24 19:28:50.357738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.258 [2024-07-24 19:28:50.357780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.258 qpair failed and we were unable to recover it. 00:28:04.258 [2024-07-24 19:28:50.358073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.258 [2024-07-24 19:28:50.358113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.258 qpair failed and we were unable to recover it. 00:28:04.258 [2024-07-24 19:28:50.358332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.258 [2024-07-24 19:28:50.358371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.258 qpair failed and we were unable to recover it. 00:28:04.259 [2024-07-24 19:28:50.358667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.259 [2024-07-24 19:28:50.358706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.259 qpair failed and we were unable to recover it. 
00:28:04.259 [2024-07-24 19:28:50.359053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.259 [2024-07-24 19:28:50.359066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.259 qpair failed and we were unable to recover it. 00:28:04.259 [2024-07-24 19:28:50.359364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.259 [2024-07-24 19:28:50.359376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.259 qpair failed and we were unable to recover it. 00:28:04.259 [2024-07-24 19:28:50.359531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.259 [2024-07-24 19:28:50.359544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.259 qpair failed and we were unable to recover it. 00:28:04.259 [2024-07-24 19:28:50.359639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.259 [2024-07-24 19:28:50.359652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.259 qpair failed and we were unable to recover it. 00:28:04.259 [2024-07-24 19:28:50.359811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.259 [2024-07-24 19:28:50.359824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.259 qpair failed and we were unable to recover it. 00:28:04.259 [2024-07-24 19:28:50.360124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.259 [2024-07-24 19:28:50.360165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.259 qpair failed and we were unable to recover it. 00:28:04.259 [2024-07-24 19:28:50.360437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.259 [2024-07-24 19:28:50.360476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.259 qpair failed and we were unable to recover it. 00:28:04.259 [2024-07-24 19:28:50.360772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.259 [2024-07-24 19:28:50.360813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.259 qpair failed and we were unable to recover it. 00:28:04.259 [2024-07-24 19:28:50.361153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.259 [2024-07-24 19:28:50.361167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.259 qpair failed and we were unable to recover it. 00:28:04.259 [2024-07-24 19:28:50.361471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.259 [2024-07-24 19:28:50.361511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.259 qpair failed and we were unable to recover it. 
00:28:04.259 [2024-07-24 19:28:50.361850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.259 [2024-07-24 19:28:50.361891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.259 qpair failed and we were unable to recover it. 00:28:04.259 [2024-07-24 19:28:50.362207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.259 [2024-07-24 19:28:50.362248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.259 qpair failed and we were unable to recover it. 00:28:04.259 [2024-07-24 19:28:50.362519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.259 [2024-07-24 19:28:50.362559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.259 qpair failed and we were unable to recover it. 00:28:04.259 [2024-07-24 19:28:50.362919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.259 [2024-07-24 19:28:50.362965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.259 qpair failed and we were unable to recover it. 00:28:04.259 [2024-07-24 19:28:50.363184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.259 [2024-07-24 19:28:50.363223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.259 qpair failed and we were unable to recover it. 00:28:04.259 [2024-07-24 19:28:50.363442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.259 [2024-07-24 19:28:50.363483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.259 qpair failed and we were unable to recover it. 00:28:04.259 [2024-07-24 19:28:50.363842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.259 [2024-07-24 19:28:50.363883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.259 qpair failed and we were unable to recover it. 00:28:04.259 [2024-07-24 19:28:50.364276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.259 [2024-07-24 19:28:50.364323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.259 qpair failed and we were unable to recover it. 00:28:04.259 [2024-07-24 19:28:50.364559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.259 [2024-07-24 19:28:50.364572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.259 qpair failed and we were unable to recover it. 00:28:04.259 [2024-07-24 19:28:50.364752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.259 [2024-07-24 19:28:50.364765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.259 qpair failed and we were unable to recover it. 
00:28:04.259 [2024-07-24 19:28:50.364992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.259 [2024-07-24 19:28:50.365005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.259 qpair failed and we were unable to recover it. 00:28:04.259 [2024-07-24 19:28:50.365165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.259 [2024-07-24 19:28:50.365177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.259 qpair failed and we were unable to recover it. 00:28:04.259 [2024-07-24 19:28:50.365393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.259 [2024-07-24 19:28:50.365406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.259 qpair failed and we were unable to recover it. 00:28:04.259 [2024-07-24 19:28:50.365740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.259 [2024-07-24 19:28:50.365781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.259 qpair failed and we were unable to recover it. 00:28:04.259 [2024-07-24 19:28:50.366171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.259 [2024-07-24 19:28:50.366211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.259 qpair failed and we were unable to recover it. 00:28:04.259 [2024-07-24 19:28:50.366597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.259 [2024-07-24 19:28:50.366637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.259 qpair failed and we were unable to recover it. 00:28:04.259 [2024-07-24 19:28:50.366861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.259 [2024-07-24 19:28:50.366902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.259 qpair failed and we were unable to recover it. 00:28:04.259 [2024-07-24 19:28:50.367134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.259 [2024-07-24 19:28:50.367174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.259 qpair failed and we were unable to recover it. 00:28:04.259 [2024-07-24 19:28:50.367529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.259 [2024-07-24 19:28:50.367570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.259 qpair failed and we were unable to recover it. 00:28:04.259 [2024-07-24 19:28:50.367791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.259 [2024-07-24 19:28:50.367833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.259 qpair failed and we were unable to recover it. 
00:28:04.259 [2024-07-24 19:28:50.368049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.259 [2024-07-24 19:28:50.368089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.259 qpair failed and we were unable to recover it. 00:28:04.259 [2024-07-24 19:28:50.368359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.259 [2024-07-24 19:28:50.368399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.259 qpair failed and we were unable to recover it. 00:28:04.259 [2024-07-24 19:28:50.368758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.259 [2024-07-24 19:28:50.368799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.259 qpair failed and we were unable to recover it. 00:28:04.259 [2024-07-24 19:28:50.369006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.259 [2024-07-24 19:28:50.369046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.259 qpair failed and we were unable to recover it. 00:28:04.260 [2024-07-24 19:28:50.369335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.260 [2024-07-24 19:28:50.369375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.260 qpair failed and we were unable to recover it. 00:28:04.260 [2024-07-24 19:28:50.369723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.260 [2024-07-24 19:28:50.369764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.260 qpair failed and we were unable to recover it. 00:28:04.260 [2024-07-24 19:28:50.370123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.260 [2024-07-24 19:28:50.370163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.260 qpair failed and we were unable to recover it. 00:28:04.260 [2024-07-24 19:28:50.370468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.260 [2024-07-24 19:28:50.370481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.260 qpair failed and we were unable to recover it. 00:28:04.260 [2024-07-24 19:28:50.370725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.260 [2024-07-24 19:28:50.370766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.260 qpair failed and we were unable to recover it. 00:28:04.260 [2024-07-24 19:28:50.371151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.260 [2024-07-24 19:28:50.371191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.260 qpair failed and we were unable to recover it. 
00:28:04.260 [2024-07-24 19:28:50.371453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.260 [2024-07-24 19:28:50.371466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.260 qpair failed and we were unable to recover it. 00:28:04.260 [2024-07-24 19:28:50.371562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.260 [2024-07-24 19:28:50.371574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.260 qpair failed and we were unable to recover it. 00:28:04.260 [2024-07-24 19:28:50.371748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.260 [2024-07-24 19:28:50.371762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.260 qpair failed and we were unable to recover it. 00:28:04.260 [2024-07-24 19:28:50.372128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.260 [2024-07-24 19:28:50.372168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.260 qpair failed and we were unable to recover it. 00:28:04.260 [2024-07-24 19:28:50.372525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.260 [2024-07-24 19:28:50.372538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.260 qpair failed and we were unable to recover it. 00:28:04.260 [2024-07-24 19:28:50.372707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.260 [2024-07-24 19:28:50.372723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.260 qpair failed and we were unable to recover it. 00:28:04.260 [2024-07-24 19:28:50.372965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.260 [2024-07-24 19:28:50.373005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.260 qpair failed and we were unable to recover it. 00:28:04.260 [2024-07-24 19:28:50.373375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.260 [2024-07-24 19:28:50.373415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.260 qpair failed and we were unable to recover it. 00:28:04.260 [2024-07-24 19:28:50.373705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.260 [2024-07-24 19:28:50.373753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.260 qpair failed and we were unable to recover it. 00:28:04.260 [2024-07-24 19:28:50.374083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.260 [2024-07-24 19:28:50.374096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.260 qpair failed and we were unable to recover it. 
00:28:04.260 [2024-07-24 19:28:50.374318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.260 [2024-07-24 19:28:50.374331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.260 qpair failed and we were unable to recover it. 00:28:04.260 [2024-07-24 19:28:50.374629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.260 [2024-07-24 19:28:50.374643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.260 qpair failed and we were unable to recover it. 00:28:04.260 [2024-07-24 19:28:50.374800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.260 [2024-07-24 19:28:50.374814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.260 qpair failed and we were unable to recover it. 00:28:04.260 [2024-07-24 19:28:50.375037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.260 [2024-07-24 19:28:50.375051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.260 qpair failed and we were unable to recover it. 00:28:04.260 [2024-07-24 19:28:50.375273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.260 [2024-07-24 19:28:50.375286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.260 qpair failed and we were unable to recover it. 00:28:04.260 [2024-07-24 19:28:50.375527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.260 [2024-07-24 19:28:50.375541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.260 qpair failed and we were unable to recover it. 00:28:04.260 [2024-07-24 19:28:50.375755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.260 [2024-07-24 19:28:50.375769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.260 qpair failed and we were unable to recover it. 00:28:04.260 [2024-07-24 19:28:50.376053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.260 [2024-07-24 19:28:50.376066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.260 qpair failed and we were unable to recover it. 00:28:04.260 [2024-07-24 19:28:50.376288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.260 [2024-07-24 19:28:50.376302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.260 qpair failed and we were unable to recover it. 00:28:04.260 [2024-07-24 19:28:50.376475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.260 [2024-07-24 19:28:50.376488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.260 qpair failed and we were unable to recover it. 
00:28:04.260 [2024-07-24 19:28:50.376785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.260 [2024-07-24 19:28:50.376798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.260 qpair failed and we were unable to recover it. 00:28:04.260 [2024-07-24 19:28:50.376978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.260 [2024-07-24 19:28:50.376991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.260 qpair failed and we were unable to recover it. 00:28:04.260 [2024-07-24 19:28:50.377143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.260 [2024-07-24 19:28:50.377156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.260 qpair failed and we were unable to recover it. 00:28:04.260 [2024-07-24 19:28:50.377382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.260 [2024-07-24 19:28:50.377422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.260 qpair failed and we were unable to recover it. 00:28:04.260 [2024-07-24 19:28:50.377731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.260 [2024-07-24 19:28:50.377772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.260 qpair failed and we were unable to recover it. 00:28:04.260 [2024-07-24 19:28:50.378043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.260 [2024-07-24 19:28:50.378082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.260 qpair failed and we were unable to recover it. 00:28:04.260 [2024-07-24 19:28:50.378430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.260 [2024-07-24 19:28:50.378442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.260 qpair failed and we were unable to recover it. 00:28:04.261 [2024-07-24 19:28:50.378604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.261 [2024-07-24 19:28:50.378617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.261 qpair failed and we were unable to recover it. 00:28:04.261 [2024-07-24 19:28:50.378931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.261 [2024-07-24 19:28:50.378945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.261 qpair failed and we were unable to recover it. 00:28:04.261 [2024-07-24 19:28:50.379203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.261 [2024-07-24 19:28:50.379217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.261 qpair failed and we were unable to recover it. 
00:28:04.261 [2024-07-24 19:28:50.379382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.261 [2024-07-24 19:28:50.379395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.261 qpair failed and we were unable to recover it. 00:28:04.261 [2024-07-24 19:28:50.379678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.261 [2024-07-24 19:28:50.379728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.261 qpair failed and we were unable to recover it. 00:28:04.261 [2024-07-24 19:28:50.380089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.261 [2024-07-24 19:28:50.380130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.261 qpair failed and we were unable to recover it. 00:28:04.261 [2024-07-24 19:28:50.380365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.261 [2024-07-24 19:28:50.380378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.261 qpair failed and we were unable to recover it. 00:28:04.261 [2024-07-24 19:28:50.380676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.261 [2024-07-24 19:28:50.380690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.261 qpair failed and we were unable to recover it. 00:28:04.261 [2024-07-24 19:28:50.380919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.261 [2024-07-24 19:28:50.380932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.261 qpair failed and we were unable to recover it. 00:28:04.261 [2024-07-24 19:28:50.381154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.261 [2024-07-24 19:28:50.381167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.261 qpair failed and we were unable to recover it. 00:28:04.261 [2024-07-24 19:28:50.381336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.261 [2024-07-24 19:28:50.381349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.261 qpair failed and we were unable to recover it. 00:28:04.261 [2024-07-24 19:28:50.381543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.261 [2024-07-24 19:28:50.381581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.261 qpair failed and we were unable to recover it. 00:28:04.261 [2024-07-24 19:28:50.381894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.261 [2024-07-24 19:28:50.381935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.261 qpair failed and we were unable to recover it. 
00:28:04.261 [2024-07-24 19:28:50.382299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.261 [2024-07-24 19:28:50.382312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.261 qpair failed and we were unable to recover it. 00:28:04.261 [2024-07-24 19:28:50.382587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.261 [2024-07-24 19:28:50.382600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.261 qpair failed and we were unable to recover it. 00:28:04.261 [2024-07-24 19:28:50.382820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.261 [2024-07-24 19:28:50.382833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.261 qpair failed and we were unable to recover it. 00:28:04.261 [2024-07-24 19:28:50.383041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.261 [2024-07-24 19:28:50.383054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.261 qpair failed and we were unable to recover it. 00:28:04.261 [2024-07-24 19:28:50.383283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.261 [2024-07-24 19:28:50.383296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.261 qpair failed and we were unable to recover it. 00:28:04.261 [2024-07-24 19:28:50.383445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.261 [2024-07-24 19:28:50.383458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.261 qpair failed and we were unable to recover it. 00:28:04.261 [2024-07-24 19:28:50.383766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.261 [2024-07-24 19:28:50.383807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.261 qpair failed and we were unable to recover it. 00:28:04.261 [2024-07-24 19:28:50.384103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.261 [2024-07-24 19:28:50.384143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.261 qpair failed and we were unable to recover it. 00:28:04.261 [2024-07-24 19:28:50.384448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.261 [2024-07-24 19:28:50.384461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.261 qpair failed and we were unable to recover it. 00:28:04.261 [2024-07-24 19:28:50.384623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.261 [2024-07-24 19:28:50.384636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.261 qpair failed and we were unable to recover it. 
00:28:04.261 [2024-07-24 19:28:50.384846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.261 [2024-07-24 19:28:50.384860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.261 qpair failed and we were unable to recover it. 00:28:04.261 [2024-07-24 19:28:50.385135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.261 [2024-07-24 19:28:50.385148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.261 qpair failed and we were unable to recover it. 00:28:04.261 [2024-07-24 19:28:50.385369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.261 [2024-07-24 19:28:50.385382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.261 qpair failed and we were unable to recover it. 00:28:04.261 [2024-07-24 19:28:50.385695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.261 [2024-07-24 19:28:50.385751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.261 qpair failed and we were unable to recover it. 00:28:04.261 [2024-07-24 19:28:50.386097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.261 [2024-07-24 19:28:50.386137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.261 qpair failed and we were unable to recover it. 00:28:04.261 [2024-07-24 19:28:50.386526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.261 [2024-07-24 19:28:50.386539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.261 qpair failed and we were unable to recover it. 00:28:04.261 [2024-07-24 19:28:50.386699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.261 [2024-07-24 19:28:50.386712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.261 qpair failed and we were unable to recover it. 00:28:04.261 [2024-07-24 19:28:50.387010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.261 [2024-07-24 19:28:50.387024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.261 qpair failed and we were unable to recover it. 00:28:04.261 [2024-07-24 19:28:50.387245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.261 [2024-07-24 19:28:50.387258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.261 qpair failed and we were unable to recover it. 00:28:04.262 [2024-07-24 19:28:50.387513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.262 [2024-07-24 19:28:50.387552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.262 qpair failed and we were unable to recover it. 
00:28:04.262 [2024-07-24 19:28:50.387774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.262 [2024-07-24 19:28:50.387814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.262 qpair failed and we were unable to recover it. 00:28:04.262 [2024-07-24 19:28:50.388153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.262 [2024-07-24 19:28:50.388193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.262 qpair failed and we were unable to recover it. 00:28:04.262 [2024-07-24 19:28:50.388479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.262 [2024-07-24 19:28:50.388519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.262 qpair failed and we were unable to recover it. 00:28:04.262 [2024-07-24 19:28:50.388677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.262 [2024-07-24 19:28:50.388725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.262 qpair failed and we were unable to recover it. 00:28:04.262 [2024-07-24 19:28:50.389090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.262 [2024-07-24 19:28:50.389130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.262 qpair failed and we were unable to recover it. 00:28:04.262 [2024-07-24 19:28:50.389411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.262 [2024-07-24 19:28:50.389451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.262 qpair failed and we were unable to recover it. 00:28:04.262 [2024-07-24 19:28:50.389740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.262 [2024-07-24 19:28:50.389781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.262 qpair failed and we were unable to recover it. 00:28:04.262 [2024-07-24 19:28:50.390129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.262 [2024-07-24 19:28:50.390170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.262 qpair failed and we were unable to recover it. 00:28:04.262 [2024-07-24 19:28:50.390532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.262 [2024-07-24 19:28:50.390571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.262 qpair failed and we were unable to recover it. 00:28:04.262 [2024-07-24 19:28:50.390872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.262 [2024-07-24 19:28:50.390913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.262 qpair failed and we were unable to recover it. 
00:28:04.262 [2024-07-24 19:28:50.391218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.262 [2024-07-24 19:28:50.391231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.262 qpair failed and we were unable to recover it. 00:28:04.262 [2024-07-24 19:28:50.391444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.262 [2024-07-24 19:28:50.391457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.262 qpair failed and we were unable to recover it. 00:28:04.262 [2024-07-24 19:28:50.391668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.262 [2024-07-24 19:28:50.391681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.262 qpair failed and we were unable to recover it. 00:28:04.262 [2024-07-24 19:28:50.391847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.262 [2024-07-24 19:28:50.391860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.262 qpair failed and we were unable to recover it. 00:28:04.262 [2024-07-24 19:28:50.392081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.262 [2024-07-24 19:28:50.392094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.262 qpair failed and we were unable to recover it. 00:28:04.262 [2024-07-24 19:28:50.392317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.262 [2024-07-24 19:28:50.392330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.262 qpair failed and we were unable to recover it. 00:28:04.262 [2024-07-24 19:28:50.392564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.262 [2024-07-24 19:28:50.392604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.262 qpair failed and we were unable to recover it. 00:28:04.262 [2024-07-24 19:28:50.392944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.262 [2024-07-24 19:28:50.392985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.262 qpair failed and we were unable to recover it. 00:28:04.262 [2024-07-24 19:28:50.393197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.262 [2024-07-24 19:28:50.393237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.262 qpair failed and we were unable to recover it. 00:28:04.262 [2024-07-24 19:28:50.393508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.262 [2024-07-24 19:28:50.393521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.262 qpair failed and we were unable to recover it. 
00:28:04.262 [2024-07-24 19:28:50.393761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.262 [2024-07-24 19:28:50.393774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.262 qpair failed and we were unable to recover it. 00:28:04.262 [2024-07-24 19:28:50.394015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.262 [2024-07-24 19:28:50.394029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.262 qpair failed and we were unable to recover it. 00:28:04.262 [2024-07-24 19:28:50.394303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.262 [2024-07-24 19:28:50.394317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.262 qpair failed and we were unable to recover it. 00:28:04.262 [2024-07-24 19:28:50.394460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.262 [2024-07-24 19:28:50.394473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.262 qpair failed and we were unable to recover it. 00:28:04.262 [2024-07-24 19:28:50.394721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.262 [2024-07-24 19:28:50.394735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.262 qpair failed and we were unable to recover it. 00:28:04.262 [2024-07-24 19:28:50.394943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.262 [2024-07-24 19:28:50.394957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.262 qpair failed and we were unable to recover it. 00:28:04.262 [2024-07-24 19:28:50.395182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.262 [2024-07-24 19:28:50.395195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.262 qpair failed and we were unable to recover it. 00:28:04.262 [2024-07-24 19:28:50.395421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.262 [2024-07-24 19:28:50.395434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.262 qpair failed and we were unable to recover it. 00:28:04.262 [2024-07-24 19:28:50.395530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.262 [2024-07-24 19:28:50.395542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.262 qpair failed and we were unable to recover it. 00:28:04.262 [2024-07-24 19:28:50.395841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.262 [2024-07-24 19:28:50.395855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.262 qpair failed and we were unable to recover it. 
00:28:04.262 [2024-07-24 19:28:50.396081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.262 [2024-07-24 19:28:50.396094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.262 qpair failed and we were unable to recover it. 00:28:04.262 [2024-07-24 19:28:50.396376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.262 [2024-07-24 19:28:50.396415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.262 qpair failed and we were unable to recover it. 00:28:04.262 [2024-07-24 19:28:50.396637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.262 [2024-07-24 19:28:50.396676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.262 qpair failed and we were unable to recover it. 00:28:04.262 [2024-07-24 19:28:50.396969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.262 [2024-07-24 19:28:50.397016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.262 qpair failed and we were unable to recover it. 00:28:04.263 [2024-07-24 19:28:50.397248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.263 [2024-07-24 19:28:50.397287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.263 qpair failed and we were unable to recover it. 00:28:04.263 [2024-07-24 19:28:50.397576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.263 [2024-07-24 19:28:50.397589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.263 qpair failed and we were unable to recover it. 00:28:04.263 [2024-07-24 19:28:50.397805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.263 [2024-07-24 19:28:50.397818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.263 qpair failed and we were unable to recover it. 00:28:04.263 [2024-07-24 19:28:50.398096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.263 [2024-07-24 19:28:50.398110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.263 qpair failed and we were unable to recover it. 00:28:04.263 [2024-07-24 19:28:50.398268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.263 [2024-07-24 19:28:50.398281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.263 qpair failed and we were unable to recover it. 00:28:04.263 [2024-07-24 19:28:50.398488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.263 [2024-07-24 19:28:50.398501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.263 qpair failed and we were unable to recover it. 
00:28:04.263 [2024-07-24 19:28:50.398643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.263 [2024-07-24 19:28:50.398656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.263 qpair failed and we were unable to recover it. 00:28:04.263 [2024-07-24 19:28:50.398826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.263 [2024-07-24 19:28:50.398840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.263 qpair failed and we were unable to recover it. 00:28:04.263 [2024-07-24 19:28:50.399081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.263 [2024-07-24 19:28:50.399094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.263 qpair failed and we were unable to recover it. 00:28:04.263 [2024-07-24 19:28:50.399257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.263 [2024-07-24 19:28:50.399269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.263 qpair failed and we were unable to recover it. 00:28:04.263 [2024-07-24 19:28:50.399436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.263 [2024-07-24 19:28:50.399450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.263 qpair failed and we were unable to recover it. 00:28:04.263 [2024-07-24 19:28:50.399771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.263 [2024-07-24 19:28:50.399785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.263 qpair failed and we were unable to recover it. 00:28:04.263 [2024-07-24 19:28:50.399977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.263 [2024-07-24 19:28:50.400017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.263 qpair failed and we were unable to recover it. 00:28:04.263 [2024-07-24 19:28:50.400314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.263 [2024-07-24 19:28:50.400354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.263 qpair failed and we were unable to recover it. 00:28:04.263 [2024-07-24 19:28:50.400549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.263 [2024-07-24 19:28:50.400562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.263 qpair failed and we were unable to recover it. 00:28:04.263 [2024-07-24 19:28:50.400860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.263 [2024-07-24 19:28:50.400874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.263 qpair failed and we were unable to recover it. 
00:28:04.263 [2024-07-24 19:28:50.401097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.263 [2024-07-24 19:28:50.401110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.263 qpair failed and we were unable to recover it. 00:28:04.263 [2024-07-24 19:28:50.401344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.263 [2024-07-24 19:28:50.401357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.263 qpair failed and we were unable to recover it. 00:28:04.263 [2024-07-24 19:28:50.401454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.263 [2024-07-24 19:28:50.401466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.263 qpair failed and we were unable to recover it. 00:28:04.263 [2024-07-24 19:28:50.401690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.263 [2024-07-24 19:28:50.401702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.263 qpair failed and we were unable to recover it. 00:28:04.263 [2024-07-24 19:28:50.401953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.263 [2024-07-24 19:28:50.401967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.263 qpair failed and we were unable to recover it. 00:28:04.263 [2024-07-24 19:28:50.402262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.263 [2024-07-24 19:28:50.402295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.263 qpair failed and we were unable to recover it. 00:28:04.263 [2024-07-24 19:28:50.402583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.263 [2024-07-24 19:28:50.402623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.263 qpair failed and we were unable to recover it. 00:28:04.263 [2024-07-24 19:28:50.402911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.263 [2024-07-24 19:28:50.402952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.263 qpair failed and we were unable to recover it. 00:28:04.263 [2024-07-24 19:28:50.403167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.263 [2024-07-24 19:28:50.403181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.263 qpair failed and we were unable to recover it. 00:28:04.263 [2024-07-24 19:28:50.403389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.263 [2024-07-24 19:28:50.403402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.263 qpair failed and we were unable to recover it. 
00:28:04.263 [2024-07-24 19:28:50.403696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.263 [2024-07-24 19:28:50.403709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.263 qpair failed and we were unable to recover it. 00:28:04.263 [2024-07-24 19:28:50.403872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.263 [2024-07-24 19:28:50.403885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.263 qpair failed and we were unable to recover it. 00:28:04.263 [2024-07-24 19:28:50.404117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.263 [2024-07-24 19:28:50.404130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.263 qpair failed and we were unable to recover it. 00:28:04.263 [2024-07-24 19:28:50.404378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.263 [2024-07-24 19:28:50.404418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.263 qpair failed and we were unable to recover it. 00:28:04.263 [2024-07-24 19:28:50.404774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.263 [2024-07-24 19:28:50.404815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.263 qpair failed and we were unable to recover it. 00:28:04.263 [2024-07-24 19:28:50.405198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.263 [2024-07-24 19:28:50.405238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.263 qpair failed and we were unable to recover it. 00:28:04.263 [2024-07-24 19:28:50.405591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.263 [2024-07-24 19:28:50.405631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.263 qpair failed and we were unable to recover it. 00:28:04.263 [2024-07-24 19:28:50.405904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.263 [2024-07-24 19:28:50.405945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.263 qpair failed and we were unable to recover it. 00:28:04.263 [2024-07-24 19:28:50.406218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.263 [2024-07-24 19:28:50.406259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.263 qpair failed and we were unable to recover it. 00:28:04.264 [2024-07-24 19:28:50.406546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.264 [2024-07-24 19:28:50.406585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.264 qpair failed and we were unable to recover it. 
00:28:04.264 [2024-07-24 19:28:50.406800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.264 [2024-07-24 19:28:50.406841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.264 qpair failed and we were unable to recover it. 00:28:04.264 [2024-07-24 19:28:50.407198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.264 [2024-07-24 19:28:50.407237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.264 qpair failed and we were unable to recover it. 00:28:04.264 [2024-07-24 19:28:50.407522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.264 [2024-07-24 19:28:50.407535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.264 qpair failed and we were unable to recover it. 00:28:04.264 [2024-07-24 19:28:50.407783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.264 [2024-07-24 19:28:50.407829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.264 qpair failed and we were unable to recover it. 00:28:04.264 [2024-07-24 19:28:50.408105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.264 [2024-07-24 19:28:50.408118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.264 qpair failed and we were unable to recover it. 00:28:04.264 [2024-07-24 19:28:50.408333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.264 [2024-07-24 19:28:50.408373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.264 qpair failed and we were unable to recover it. 00:28:04.264 [2024-07-24 19:28:50.408673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.264 [2024-07-24 19:28:50.408713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.264 qpair failed and we were unable to recover it. 00:28:04.264 [2024-07-24 19:28:50.409104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.264 [2024-07-24 19:28:50.409145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.264 qpair failed and we were unable to recover it. 00:28:04.264 [2024-07-24 19:28:50.409479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.264 [2024-07-24 19:28:50.409493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.264 qpair failed and we were unable to recover it. 00:28:04.264 [2024-07-24 19:28:50.409793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.264 [2024-07-24 19:28:50.409834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.264 qpair failed and we were unable to recover it. 
00:28:04.264 [2024-07-24 19:28:50.410045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.264 [2024-07-24 19:28:50.410085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.264 qpair failed and we were unable to recover it. 00:28:04.264 [2024-07-24 19:28:50.410361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.264 [2024-07-24 19:28:50.410412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.264 qpair failed and we were unable to recover it. 00:28:04.264 [2024-07-24 19:28:50.410696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.264 [2024-07-24 19:28:50.410709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.264 qpair failed and we were unable to recover it. 00:28:04.264 [2024-07-24 19:28:50.411011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.264 [2024-07-24 19:28:50.411024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.264 qpair failed and we were unable to recover it. 00:28:04.264 [2024-07-24 19:28:50.411302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.264 [2024-07-24 19:28:50.411315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.264 qpair failed and we were unable to recover it. 00:28:04.264 [2024-07-24 19:28:50.411567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.264 [2024-07-24 19:28:50.411580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.264 qpair failed and we were unable to recover it. 00:28:04.264 [2024-07-24 19:28:50.411741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.264 [2024-07-24 19:28:50.411754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.264 qpair failed and we were unable to recover it. 00:28:04.264 [2024-07-24 19:28:50.411974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.264 [2024-07-24 19:28:50.411987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.264 qpair failed and we were unable to recover it. 00:28:04.264 [2024-07-24 19:28:50.412201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.264 [2024-07-24 19:28:50.412214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.264 qpair failed and we were unable to recover it. 00:28:04.264 [2024-07-24 19:28:50.412489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.264 [2024-07-24 19:28:50.412502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.264 qpair failed and we were unable to recover it. 
00:28:04.264 [2024-07-24 19:28:50.412718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.264 [2024-07-24 19:28:50.412731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.264 qpair failed and we were unable to recover it. 00:28:04.264 [2024-07-24 19:28:50.413029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.264 [2024-07-24 19:28:50.413042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.264 qpair failed and we were unable to recover it. 00:28:04.264 [2024-07-24 19:28:50.413194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.264 [2024-07-24 19:28:50.413208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.264 qpair failed and we were unable to recover it. 00:28:04.264 [2024-07-24 19:28:50.413484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.264 [2024-07-24 19:28:50.413497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.264 qpair failed and we were unable to recover it. 00:28:04.264 [2024-07-24 19:28:50.413763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.264 [2024-07-24 19:28:50.413776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.264 qpair failed and we were unable to recover it. 00:28:04.264 [2024-07-24 19:28:50.414051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.264 [2024-07-24 19:28:50.414089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.264 qpair failed and we were unable to recover it. 00:28:04.264 [2024-07-24 19:28:50.414463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.264 [2024-07-24 19:28:50.414474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.264 qpair failed and we were unable to recover it. 00:28:04.264 [2024-07-24 19:28:50.414723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.264 [2024-07-24 19:28:50.414737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.264 qpair failed and we were unable to recover it. 00:28:04.264 [2024-07-24 19:28:50.415094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.264 [2024-07-24 19:28:50.415107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.264 qpair failed and we were unable to recover it. 00:28:04.264 [2024-07-24 19:28:50.415338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.264 [2024-07-24 19:28:50.415352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.264 qpair failed and we were unable to recover it. 
00:28:04.264 [2024-07-24 19:28:50.415513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.264 [2024-07-24 19:28:50.415526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.264 qpair failed and we were unable to recover it. 00:28:04.264 [2024-07-24 19:28:50.415744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.264 [2024-07-24 19:28:50.415785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.264 qpair failed and we were unable to recover it. 00:28:04.264 [2024-07-24 19:28:50.416000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.264 [2024-07-24 19:28:50.416040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.264 qpair failed and we were unable to recover it. 00:28:04.264 [2024-07-24 19:28:50.416272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.264 [2024-07-24 19:28:50.416285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.264 qpair failed and we were unable to recover it. 00:28:04.265 [2024-07-24 19:28:50.416521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.265 [2024-07-24 19:28:50.416534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.265 qpair failed and we were unable to recover it. 00:28:04.265 [2024-07-24 19:28:50.416762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.265 [2024-07-24 19:28:50.416776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.265 qpair failed and we were unable to recover it. 00:28:04.265 [2024-07-24 19:28:50.416996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.265 [2024-07-24 19:28:50.417009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.265 qpair failed and we were unable to recover it. 00:28:04.265 [2024-07-24 19:28:50.417230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.265 [2024-07-24 19:28:50.417243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.265 qpair failed and we were unable to recover it. 00:28:04.265 [2024-07-24 19:28:50.417479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.265 [2024-07-24 19:28:50.417519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.265 qpair failed and we were unable to recover it. 00:28:04.265 [2024-07-24 19:28:50.417807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.265 [2024-07-24 19:28:50.417848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.265 qpair failed and we were unable to recover it. 
00:28:04.265 [2024-07-24 19:28:50.418148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.265 [2024-07-24 19:28:50.418188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.265 qpair failed and we were unable to recover it. 00:28:04.265 [2024-07-24 19:28:50.418411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.265 [2024-07-24 19:28:50.418451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.265 qpair failed and we were unable to recover it. 00:28:04.265 [2024-07-24 19:28:50.418737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.265 [2024-07-24 19:28:50.418750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.265 qpair failed and we were unable to recover it. 00:28:04.265 [2024-07-24 19:28:50.418969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.265 [2024-07-24 19:28:50.418983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.265 qpair failed and we were unable to recover it. 00:28:04.265 [2024-07-24 19:28:50.419207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.265 [2024-07-24 19:28:50.419247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.265 qpair failed and we were unable to recover it. 00:28:04.265 [2024-07-24 19:28:50.419537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.265 [2024-07-24 19:28:50.419577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.265 qpair failed and we were unable to recover it. 00:28:04.265 [2024-07-24 19:28:50.419946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.265 [2024-07-24 19:28:50.419987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.265 qpair failed and we were unable to recover it. 00:28:04.265 [2024-07-24 19:28:50.420272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.265 [2024-07-24 19:28:50.420312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.265 qpair failed and we were unable to recover it. 00:28:04.265 [2024-07-24 19:28:50.420582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.265 [2024-07-24 19:28:50.420595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.265 qpair failed and we were unable to recover it. 00:28:04.265 [2024-07-24 19:28:50.420743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.265 [2024-07-24 19:28:50.420757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.265 qpair failed and we were unable to recover it. 
00:28:04.265 [2024-07-24 19:28:50.421004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.265 [2024-07-24 19:28:50.421044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.265 qpair failed and we were unable to recover it. 00:28:04.265 [2024-07-24 19:28:50.421342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.265 [2024-07-24 19:28:50.421382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.265 qpair failed and we were unable to recover it. 00:28:04.265 [2024-07-24 19:28:50.421663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.265 [2024-07-24 19:28:50.421676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.265 qpair failed and we were unable to recover it. 00:28:04.265 [2024-07-24 19:28:50.421952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.265 [2024-07-24 19:28:50.421966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.265 qpair failed and we were unable to recover it. 00:28:04.265 [2024-07-24 19:28:50.422263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.265 [2024-07-24 19:28:50.422277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.265 qpair failed and we were unable to recover it. 00:28:04.265 [2024-07-24 19:28:50.422495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.265 [2024-07-24 19:28:50.422507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.265 qpair failed and we were unable to recover it. 00:28:04.265 [2024-07-24 19:28:50.422663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.265 [2024-07-24 19:28:50.422676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.265 qpair failed and we were unable to recover it. 00:28:04.265 [2024-07-24 19:28:50.422965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.265 [2024-07-24 19:28:50.423006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.265 qpair failed and we were unable to recover it. 00:28:04.265 [2024-07-24 19:28:50.423340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.265 [2024-07-24 19:28:50.423353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.265 qpair failed and we were unable to recover it. 00:28:04.265 [2024-07-24 19:28:50.423585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.265 [2024-07-24 19:28:50.423598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.265 qpair failed and we were unable to recover it. 
00:28:04.265 [2024-07-24 19:28:50.423820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.265 [2024-07-24 19:28:50.423833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.265 qpair failed and we were unable to recover it. 00:28:04.265 [2024-07-24 19:28:50.424056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.265 [2024-07-24 19:28:50.424068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.265 qpair failed and we were unable to recover it. 00:28:04.265 [2024-07-24 19:28:50.424344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.265 [2024-07-24 19:28:50.424357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.265 qpair failed and we were unable to recover it. 00:28:04.265 [2024-07-24 19:28:50.424675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.265 [2024-07-24 19:28:50.424688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.265 qpair failed and we were unable to recover it. 00:28:04.265 [2024-07-24 19:28:50.424834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.265 [2024-07-24 19:28:50.424848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.265 qpair failed and we were unable to recover it. 00:28:04.265 [2024-07-24 19:28:50.425058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.265 [2024-07-24 19:28:50.425071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.266 qpair failed and we were unable to recover it. 00:28:04.266 [2024-07-24 19:28:50.425314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.266 [2024-07-24 19:28:50.425353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.266 qpair failed and we were unable to recover it. 00:28:04.266 [2024-07-24 19:28:50.425564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.266 [2024-07-24 19:28:50.425605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.266 qpair failed and we were unable to recover it. 00:28:04.266 [2024-07-24 19:28:50.425890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.266 [2024-07-24 19:28:50.425932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.266 qpair failed and we were unable to recover it. 00:28:04.266 [2024-07-24 19:28:50.426215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.266 [2024-07-24 19:28:50.426228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.266 qpair failed and we were unable to recover it. 
00:28:04.266 [2024-07-24 19:28:50.426461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.266 [2024-07-24 19:28:50.426474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.266 qpair failed and we were unable to recover it. 00:28:04.266 [2024-07-24 19:28:50.426702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.266 [2024-07-24 19:28:50.426719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.266 qpair failed and we were unable to recover it. 00:28:04.266 [2024-07-24 19:28:50.426994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.266 [2024-07-24 19:28:50.427007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.266 qpair failed and we were unable to recover it. 00:28:04.266 [2024-07-24 19:28:50.427235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.266 [2024-07-24 19:28:50.427248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.266 qpair failed and we were unable to recover it. 00:28:04.266 [2024-07-24 19:28:50.427350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.266 [2024-07-24 19:28:50.427362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.266 qpair failed and we were unable to recover it. 00:28:04.266 [2024-07-24 19:28:50.427599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.266 [2024-07-24 19:28:50.427612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.266 qpair failed and we were unable to recover it. 00:28:04.266 [2024-07-24 19:28:50.427765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.266 [2024-07-24 19:28:50.427779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.266 qpair failed and we were unable to recover it. 00:28:04.266 [2024-07-24 19:28:50.427918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.266 [2024-07-24 19:28:50.427931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.266 qpair failed and we were unable to recover it. 00:28:04.266 [2024-07-24 19:28:50.428152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.266 [2024-07-24 19:28:50.428165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.266 qpair failed and we were unable to recover it. 00:28:04.266 [2024-07-24 19:28:50.428400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.266 [2024-07-24 19:28:50.428439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.266 qpair failed and we were unable to recover it. 
00:28:04.266 [2024-07-24 19:28:50.428752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.266 [2024-07-24 19:28:50.428793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.266 qpair failed and we were unable to recover it. 00:28:04.266 [2024-07-24 19:28:50.429064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.266 [2024-07-24 19:28:50.429103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.266 qpair failed and we were unable to recover it. 00:28:04.266 [2024-07-24 19:28:50.429465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.266 [2024-07-24 19:28:50.429504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.266 qpair failed and we were unable to recover it. 00:28:04.266 [2024-07-24 19:28:50.429807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.266 [2024-07-24 19:28:50.429822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.266 qpair failed and we were unable to recover it. 00:28:04.266 [2024-07-24 19:28:50.430049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.266 [2024-07-24 19:28:50.430063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.266 qpair failed and we were unable to recover it. 00:28:04.266 [2024-07-24 19:28:50.430308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.266 [2024-07-24 19:28:50.430322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.266 qpair failed and we were unable to recover it. 00:28:04.266 [2024-07-24 19:28:50.430541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.266 [2024-07-24 19:28:50.430554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.266 qpair failed and we were unable to recover it. 00:28:04.266 [2024-07-24 19:28:50.430768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.266 [2024-07-24 19:28:50.430781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.266 qpair failed and we were unable to recover it. 00:28:04.266 [2024-07-24 19:28:50.431015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.266 [2024-07-24 19:28:50.431029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.266 qpair failed and we were unable to recover it. 00:28:04.266 [2024-07-24 19:28:50.431186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.266 [2024-07-24 19:28:50.431199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.266 qpair failed and we were unable to recover it. 
00:28:04.266 [2024-07-24 19:28:50.431520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.266 [2024-07-24 19:28:50.431533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:04.266 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error pair for tqpair=0x7fd54c000b90 (addr=10.0.0.2, port=4420) repeats continuously from 19:28:50.431756 through 19:28:50.496483, each attempt ending in "qpair failed and we were unable to recover it."; repeated entries collapsed ...]
00:28:04.550 [2024-07-24 19:28:50.496625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.550 [2024-07-24 19:28:50.496638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:04.550 qpair failed and we were unable to recover it.
00:28:04.550 [2024-07-24 19:28:50.496918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.550 [2024-07-24 19:28:50.496959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.550 qpair failed and we were unable to recover it. 00:28:04.550 [2024-07-24 19:28:50.497248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.550 [2024-07-24 19:28:50.497288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.550 qpair failed and we were unable to recover it. 00:28:04.550 [2024-07-24 19:28:50.497625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.550 [2024-07-24 19:28:50.497665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.550 qpair failed and we were unable to recover it. 00:28:04.550 [2024-07-24 19:28:50.497974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.550 [2024-07-24 19:28:50.497987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.550 qpair failed and we were unable to recover it. 00:28:04.550 [2024-07-24 19:28:50.498192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.550 [2024-07-24 19:28:50.498205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.550 qpair failed and we were unable to recover it. 00:28:04.550 [2024-07-24 19:28:50.498428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.550 [2024-07-24 19:28:50.498441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.550 qpair failed and we were unable to recover it. 00:28:04.550 [2024-07-24 19:28:50.498748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.550 [2024-07-24 19:28:50.498789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.550 qpair failed and we were unable to recover it. 00:28:04.550 [2024-07-24 19:28:50.499014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.550 [2024-07-24 19:28:50.499054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.550 qpair failed and we were unable to recover it. 00:28:04.550 [2024-07-24 19:28:50.499424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.550 [2024-07-24 19:28:50.499464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.550 qpair failed and we were unable to recover it. 00:28:04.550 [2024-07-24 19:28:50.499778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.550 [2024-07-24 19:28:50.499819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.550 qpair failed and we were unable to recover it. 
00:28:04.550 [2024-07-24 19:28:50.500089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.550 [2024-07-24 19:28:50.500134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.550 qpair failed and we were unable to recover it. 00:28:04.550 [2024-07-24 19:28:50.500399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.550 [2024-07-24 19:28:50.500413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.550 qpair failed and we were unable to recover it. 00:28:04.550 [2024-07-24 19:28:50.500687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.550 [2024-07-24 19:28:50.500700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.550 qpair failed and we were unable to recover it. 00:28:04.550 [2024-07-24 19:28:50.500986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.550 [2024-07-24 19:28:50.501000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.550 qpair failed and we were unable to recover it. 00:28:04.550 [2024-07-24 19:28:50.501297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.550 [2024-07-24 19:28:50.501337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.550 qpair failed and we were unable to recover it. 00:28:04.550 [2024-07-24 19:28:50.501677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.550 [2024-07-24 19:28:50.501725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.550 qpair failed and we were unable to recover it. 00:28:04.550 [2024-07-24 19:28:50.502036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.550 [2024-07-24 19:28:50.502076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.550 qpair failed and we were unable to recover it. 00:28:04.550 [2024-07-24 19:28:50.502454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.550 [2024-07-24 19:28:50.502496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.550 qpair failed and we were unable to recover it. 00:28:04.550 [2024-07-24 19:28:50.502800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.550 [2024-07-24 19:28:50.502841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.550 qpair failed and we were unable to recover it. 00:28:04.550 [2024-07-24 19:28:50.503065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.550 [2024-07-24 19:28:50.503105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.550 qpair failed and we were unable to recover it. 
00:28:04.550 [2024-07-24 19:28:50.503448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.550 [2024-07-24 19:28:50.503488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.550 qpair failed and we were unable to recover it. 00:28:04.550 [2024-07-24 19:28:50.503711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.550 [2024-07-24 19:28:50.503772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.550 qpair failed and we were unable to recover it. 00:28:04.550 [2024-07-24 19:28:50.504130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.550 [2024-07-24 19:28:50.504171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.550 qpair failed and we were unable to recover it. 00:28:04.550 [2024-07-24 19:28:50.504507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.551 [2024-07-24 19:28:50.504547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.551 qpair failed and we were unable to recover it. 00:28:04.551 [2024-07-24 19:28:50.504915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.551 [2024-07-24 19:28:50.504929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.551 qpair failed and we were unable to recover it. 00:28:04.551 [2024-07-24 19:28:50.505173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.551 [2024-07-24 19:28:50.505210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.551 qpair failed and we were unable to recover it. 00:28:04.551 [2024-07-24 19:28:50.505500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.551 [2024-07-24 19:28:50.505539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.551 qpair failed and we were unable to recover it. 00:28:04.551 [2024-07-24 19:28:50.505812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.551 [2024-07-24 19:28:50.505854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.551 qpair failed and we were unable to recover it. 00:28:04.551 [2024-07-24 19:28:50.506147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.551 [2024-07-24 19:28:50.506188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.551 qpair failed and we were unable to recover it. 00:28:04.551 [2024-07-24 19:28:50.506403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.551 [2024-07-24 19:28:50.506416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.551 qpair failed and we were unable to recover it. 
00:28:04.551 [2024-07-24 19:28:50.506666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.551 [2024-07-24 19:28:50.506706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.551 qpair failed and we were unable to recover it. 00:28:04.551 [2024-07-24 19:28:50.507023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.551 [2024-07-24 19:28:50.507064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.551 qpair failed and we were unable to recover it. 00:28:04.551 [2024-07-24 19:28:50.507359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.551 [2024-07-24 19:28:50.507399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.551 qpair failed and we were unable to recover it. 00:28:04.551 [2024-07-24 19:28:50.507615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.551 [2024-07-24 19:28:50.507655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.551 qpair failed and we were unable to recover it. 00:28:04.551 [2024-07-24 19:28:50.508047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.551 [2024-07-24 19:28:50.508089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.551 qpair failed and we were unable to recover it. 00:28:04.551 [2024-07-24 19:28:50.508390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.551 [2024-07-24 19:28:50.508430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.551 qpair failed and we were unable to recover it. 00:28:04.551 [2024-07-24 19:28:50.508769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.551 [2024-07-24 19:28:50.508811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.551 qpair failed and we were unable to recover it. 00:28:04.551 [2024-07-24 19:28:50.509168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.551 [2024-07-24 19:28:50.509209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.551 qpair failed and we were unable to recover it. 00:28:04.551 [2024-07-24 19:28:50.509492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.551 [2024-07-24 19:28:50.509504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.551 qpair failed and we were unable to recover it. 00:28:04.551 [2024-07-24 19:28:50.509733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.551 [2024-07-24 19:28:50.509775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.551 qpair failed and we were unable to recover it. 
00:28:04.551 [2024-07-24 19:28:50.510058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.551 [2024-07-24 19:28:50.510097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.551 qpair failed and we were unable to recover it. 00:28:04.551 [2024-07-24 19:28:50.510435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.551 [2024-07-24 19:28:50.510476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.551 qpair failed and we were unable to recover it. 00:28:04.551 [2024-07-24 19:28:50.510626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.551 [2024-07-24 19:28:50.510667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.551 qpair failed and we were unable to recover it. 00:28:04.551 [2024-07-24 19:28:50.510848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.551 [2024-07-24 19:28:50.510862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.551 qpair failed and we were unable to recover it. 00:28:04.551 [2024-07-24 19:28:50.511122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.551 [2024-07-24 19:28:50.511135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.551 qpair failed and we were unable to recover it. 00:28:04.551 [2024-07-24 19:28:50.511436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.551 [2024-07-24 19:28:50.511449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.551 qpair failed and we were unable to recover it. 00:28:04.551 [2024-07-24 19:28:50.511744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.551 [2024-07-24 19:28:50.511785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.551 qpair failed and we were unable to recover it. 00:28:04.551 [2024-07-24 19:28:50.512127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.551 [2024-07-24 19:28:50.512167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.551 qpair failed and we were unable to recover it. 00:28:04.551 [2024-07-24 19:28:50.512462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.551 [2024-07-24 19:28:50.512474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.551 qpair failed and we were unable to recover it. 00:28:04.551 [2024-07-24 19:28:50.512694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.551 [2024-07-24 19:28:50.512746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.551 qpair failed and we were unable to recover it. 
00:28:04.551 [2024-07-24 19:28:50.513051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.551 [2024-07-24 19:28:50.513097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.551 qpair failed and we were unable to recover it. 00:28:04.551 [2024-07-24 19:28:50.513403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.551 [2024-07-24 19:28:50.513442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.551 qpair failed and we were unable to recover it. 00:28:04.551 [2024-07-24 19:28:50.513805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.551 [2024-07-24 19:28:50.513846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.551 qpair failed and we were unable to recover it. 00:28:04.551 [2024-07-24 19:28:50.514060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.551 [2024-07-24 19:28:50.514101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.551 qpair failed and we were unable to recover it. 00:28:04.551 [2024-07-24 19:28:50.514438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.551 [2024-07-24 19:28:50.514478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.551 qpair failed and we were unable to recover it. 00:28:04.551 [2024-07-24 19:28:50.514833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.551 [2024-07-24 19:28:50.514874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.551 qpair failed and we were unable to recover it. 00:28:04.551 [2024-07-24 19:28:50.515095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.551 [2024-07-24 19:28:50.515136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.551 qpair failed and we were unable to recover it. 00:28:04.551 [2024-07-24 19:28:50.515484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.551 [2024-07-24 19:28:50.515524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.552 qpair failed and we were unable to recover it. 00:28:04.552 [2024-07-24 19:28:50.515890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.552 [2024-07-24 19:28:50.515931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.552 qpair failed and we were unable to recover it. 00:28:04.552 [2024-07-24 19:28:50.516234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.552 [2024-07-24 19:28:50.516274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.552 qpair failed and we were unable to recover it. 
00:28:04.552 [2024-07-24 19:28:50.516557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.552 [2024-07-24 19:28:50.516570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.552 qpair failed and we were unable to recover it. 00:28:04.552 [2024-07-24 19:28:50.516828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.552 [2024-07-24 19:28:50.516841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.552 qpair failed and we were unable to recover it. 00:28:04.552 [2024-07-24 19:28:50.517064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.552 [2024-07-24 19:28:50.517077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.552 qpair failed and we were unable to recover it. 00:28:04.552 [2024-07-24 19:28:50.517381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.552 [2024-07-24 19:28:50.517422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.552 qpair failed and we were unable to recover it. 00:28:04.552 [2024-07-24 19:28:50.517633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.552 [2024-07-24 19:28:50.517673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.552 qpair failed and we were unable to recover it. 00:28:04.552 [2024-07-24 19:28:50.517956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.552 [2024-07-24 19:28:50.517997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.552 qpair failed and we were unable to recover it. 00:28:04.552 [2024-07-24 19:28:50.518357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.552 [2024-07-24 19:28:50.518396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.552 qpair failed and we were unable to recover it. 00:28:04.552 [2024-07-24 19:28:50.518736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.552 [2024-07-24 19:28:50.518777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.552 qpair failed and we were unable to recover it. 00:28:04.552 [2024-07-24 19:28:50.519059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.552 [2024-07-24 19:28:50.519099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.552 qpair failed and we were unable to recover it. 00:28:04.552 [2024-07-24 19:28:50.519448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.552 [2024-07-24 19:28:50.519489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.552 qpair failed and we were unable to recover it. 
00:28:04.552 [2024-07-24 19:28:50.519777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.552 [2024-07-24 19:28:50.519818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.552 qpair failed and we were unable to recover it. 00:28:04.552 [2024-07-24 19:28:50.520041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.552 [2024-07-24 19:28:50.520081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.552 qpair failed and we were unable to recover it. 00:28:04.552 [2024-07-24 19:28:50.520417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.552 [2024-07-24 19:28:50.520458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.552 qpair failed and we were unable to recover it. 00:28:04.552 [2024-07-24 19:28:50.520700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.552 [2024-07-24 19:28:50.520745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.552 qpair failed and we were unable to recover it. 00:28:04.552 [2024-07-24 19:28:50.521065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.552 [2024-07-24 19:28:50.521105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.552 qpair failed and we were unable to recover it. 00:28:04.552 [2024-07-24 19:28:50.521442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.552 [2024-07-24 19:28:50.521481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.552 qpair failed and we were unable to recover it. 00:28:04.552 [2024-07-24 19:28:50.521745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.552 [2024-07-24 19:28:50.521768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.552 qpair failed and we were unable to recover it. 00:28:04.552 [2024-07-24 19:28:50.521985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.552 [2024-07-24 19:28:50.521998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.552 qpair failed and we were unable to recover it. 00:28:04.552 [2024-07-24 19:28:50.522295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.552 [2024-07-24 19:28:50.522307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.552 qpair failed and we were unable to recover it. 00:28:04.552 [2024-07-24 19:28:50.522611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.552 [2024-07-24 19:28:50.522651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.552 qpair failed and we were unable to recover it. 
00:28:04.552 [2024-07-24 19:28:50.523002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.552 [2024-07-24 19:28:50.523043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.552 qpair failed and we were unable to recover it. 00:28:04.552 [2024-07-24 19:28:50.523260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.552 [2024-07-24 19:28:50.523301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.552 qpair failed and we were unable to recover it. 00:28:04.552 [2024-07-24 19:28:50.523572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.552 [2024-07-24 19:28:50.523612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.552 qpair failed and we were unable to recover it. 00:28:04.552 [2024-07-24 19:28:50.523856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.552 [2024-07-24 19:28:50.523870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.552 qpair failed and we were unable to recover it. 00:28:04.552 [2024-07-24 19:28:50.524115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.552 [2024-07-24 19:28:50.524147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.552 qpair failed and we were unable to recover it. 00:28:04.552 [2024-07-24 19:28:50.524485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.552 [2024-07-24 19:28:50.524525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.553 qpair failed and we were unable to recover it. 00:28:04.553 [2024-07-24 19:28:50.524797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.553 [2024-07-24 19:28:50.524838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.553 qpair failed and we were unable to recover it. 00:28:04.553 [2024-07-24 19:28:50.525204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.553 [2024-07-24 19:28:50.525245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.553 qpair failed and we were unable to recover it. 00:28:04.553 [2024-07-24 19:28:50.525582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.553 [2024-07-24 19:28:50.525622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.553 qpair failed and we were unable to recover it. 00:28:04.553 [2024-07-24 19:28:50.525846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.553 [2024-07-24 19:28:50.525886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.553 qpair failed and we were unable to recover it. 
00:28:04.553 [2024-07-24 19:28:50.526107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.553 [2024-07-24 19:28:50.526152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.553 qpair failed and we were unable to recover it. 00:28:04.553 [2024-07-24 19:28:50.526503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.553 [2024-07-24 19:28:50.526516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.553 qpair failed and we were unable to recover it. 00:28:04.553 [2024-07-24 19:28:50.526743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.553 [2024-07-24 19:28:50.526756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.553 qpair failed and we were unable to recover it. 00:28:04.553 [2024-07-24 19:28:50.526970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.553 [2024-07-24 19:28:50.526984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.553 qpair failed and we were unable to recover it. 00:28:04.553 [2024-07-24 19:28:50.527240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.553 [2024-07-24 19:28:50.527292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.553 qpair failed and we were unable to recover it. 00:28:04.553 [2024-07-24 19:28:50.527628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.553 [2024-07-24 19:28:50.527669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.553 qpair failed and we were unable to recover it. 00:28:04.553 [2024-07-24 19:28:50.527967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.553 [2024-07-24 19:28:50.528008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.553 qpair failed and we were unable to recover it. 00:28:04.553 [2024-07-24 19:28:50.528280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.553 [2024-07-24 19:28:50.528321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.553 qpair failed and we were unable to recover it. 00:28:04.553 [2024-07-24 19:28:50.528569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.553 [2024-07-24 19:28:50.528583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.553 qpair failed and we were unable to recover it. 00:28:04.553 [2024-07-24 19:28:50.528802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.553 [2024-07-24 19:28:50.528815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.553 qpair failed and we were unable to recover it. 
00:28:04.553 [2024-07-24 19:28:50.529037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.553 [2024-07-24 19:28:50.529050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.553 qpair failed and we were unable to recover it. 00:28:04.553 [2024-07-24 19:28:50.529336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.553 [2024-07-24 19:28:50.529376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.553 qpair failed and we were unable to recover it. 00:28:04.553 [2024-07-24 19:28:50.529661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.553 [2024-07-24 19:28:50.529702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.553 qpair failed and we were unable to recover it. 00:28:04.553 [2024-07-24 19:28:50.530007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.553 [2024-07-24 19:28:50.530048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.553 qpair failed and we were unable to recover it. 00:28:04.553 [2024-07-24 19:28:50.530276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.553 [2024-07-24 19:28:50.530316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.553 qpair failed and we were unable to recover it. 00:28:04.553 [2024-07-24 19:28:50.530518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.553 [2024-07-24 19:28:50.530559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.553 qpair failed and we were unable to recover it. 00:28:04.553 [2024-07-24 19:28:50.530918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.553 [2024-07-24 19:28:50.530959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.553 qpair failed and we were unable to recover it. 00:28:04.553 [2024-07-24 19:28:50.531173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.553 [2024-07-24 19:28:50.531214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.553 qpair failed and we were unable to recover it. 00:28:04.553 [2024-07-24 19:28:50.531498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.553 [2024-07-24 19:28:50.531539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.553 qpair failed and we were unable to recover it. 00:28:04.553 [2024-07-24 19:28:50.531760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.553 [2024-07-24 19:28:50.531801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.553 qpair failed and we were unable to recover it. 
00:28:04.553 [2024-07-24 19:28:50.531969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.553 [2024-07-24 19:28:50.532009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.553 qpair failed and we were unable to recover it. 00:28:04.553 [2024-07-24 19:28:50.532293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.553 [2024-07-24 19:28:50.532333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.553 qpair failed and we were unable to recover it. 00:28:04.553 [2024-07-24 19:28:50.532662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.553 [2024-07-24 19:28:50.532703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.553 qpair failed and we were unable to recover it. 00:28:04.553 [2024-07-24 19:28:50.533021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.553 [2024-07-24 19:28:50.533061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.553 qpair failed and we were unable to recover it. 00:28:04.553 [2024-07-24 19:28:50.533384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.553 [2024-07-24 19:28:50.533424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.553 qpair failed and we were unable to recover it. 00:28:04.553 [2024-07-24 19:28:50.533697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.553 [2024-07-24 19:28:50.533748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.553 qpair failed and we were unable to recover it. 00:28:04.553 [2024-07-24 19:28:50.534109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.553 [2024-07-24 19:28:50.534149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.553 qpair failed and we were unable to recover it. 00:28:04.553 [2024-07-24 19:28:50.534374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.553 [2024-07-24 19:28:50.534414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.553 qpair failed and we were unable to recover it. 00:28:04.553 [2024-07-24 19:28:50.534769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.553 [2024-07-24 19:28:50.534811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.553 qpair failed and we were unable to recover it. 00:28:04.553 [2024-07-24 19:28:50.535196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.553 [2024-07-24 19:28:50.535236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.553 qpair failed and we were unable to recover it. 
00:28:04.553 [2024-07-24 19:28:50.535510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.553 [2024-07-24 19:28:50.535523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.553 qpair failed and we were unable to recover it. 00:28:04.554 [2024-07-24 19:28:50.535823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.554 [2024-07-24 19:28:50.535864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.554 qpair failed and we were unable to recover it. 00:28:04.554 [2024-07-24 19:28:50.536088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.554 [2024-07-24 19:28:50.536128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.554 qpair failed and we were unable to recover it. 00:28:04.554 [2024-07-24 19:28:50.536495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.554 [2024-07-24 19:28:50.536535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.554 qpair failed and we were unable to recover it. 00:28:04.554 [2024-07-24 19:28:50.536805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.554 [2024-07-24 19:28:50.536818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.554 qpair failed and we were unable to recover it. 00:28:04.554 [2024-07-24 19:28:50.537141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.554 [2024-07-24 19:28:50.537181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.554 qpair failed and we were unable to recover it. 00:28:04.554 [2024-07-24 19:28:50.537497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.554 [2024-07-24 19:28:50.537537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.554 qpair failed and we were unable to recover it. 00:28:04.554 [2024-07-24 19:28:50.537876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.554 [2024-07-24 19:28:50.537917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.554 qpair failed and we were unable to recover it. 00:28:04.554 [2024-07-24 19:28:50.538253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.554 [2024-07-24 19:28:50.538293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.554 qpair failed and we were unable to recover it. 00:28:04.554 [2024-07-24 19:28:50.538630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.554 [2024-07-24 19:28:50.538674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.554 qpair failed and we were unable to recover it. 
00:28:04.554 [2024-07-24 19:28:50.538973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.554 [2024-07-24 19:28:50.538988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.554 qpair failed and we were unable to recover it. 00:28:04.554 [2024-07-24 19:28:50.539169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.554 [2024-07-24 19:28:50.539210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.554 qpair failed and we were unable to recover it. 00:28:04.554 [2024-07-24 19:28:50.539440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.554 [2024-07-24 19:28:50.539481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.554 qpair failed and we were unable to recover it. 00:28:04.554 [2024-07-24 19:28:50.539832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.554 [2024-07-24 19:28:50.539846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.554 qpair failed and we were unable to recover it. 00:28:04.554 [2024-07-24 19:28:50.540090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.554 [2024-07-24 19:28:50.540104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.554 qpair failed and we were unable to recover it. 00:28:04.554 [2024-07-24 19:28:50.540338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.554 [2024-07-24 19:28:50.540350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.554 qpair failed and we were unable to recover it. 00:28:04.554 [2024-07-24 19:28:50.540515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.554 [2024-07-24 19:28:50.540528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.554 qpair failed and we were unable to recover it. 00:28:04.554 [2024-07-24 19:28:50.540665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.554 [2024-07-24 19:28:50.540678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.554 qpair failed and we were unable to recover it. 00:28:04.554 [2024-07-24 19:28:50.540894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.554 [2024-07-24 19:28:50.540935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.554 qpair failed and we were unable to recover it. 00:28:04.554 [2024-07-24 19:28:50.541208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.554 [2024-07-24 19:28:50.541248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.554 qpair failed and we were unable to recover it. 
00:28:04.554 [2024-07-24 19:28:50.541523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.554 [2024-07-24 19:28:50.541535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.554 qpair failed and we were unable to recover it. 00:28:04.554 [2024-07-24 19:28:50.541760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.554 [2024-07-24 19:28:50.541773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.554 qpair failed and we were unable to recover it. 00:28:04.554 [2024-07-24 19:28:50.542104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.554 [2024-07-24 19:28:50.542144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.554 qpair failed and we were unable to recover it. 00:28:04.554 [2024-07-24 19:28:50.542443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.554 [2024-07-24 19:28:50.542484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.554 qpair failed and we were unable to recover it. 00:28:04.554 [2024-07-24 19:28:50.542683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.554 [2024-07-24 19:28:50.542696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.554 qpair failed and we were unable to recover it. 00:28:04.554 [2024-07-24 19:28:50.542795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.554 [2024-07-24 19:28:50.542807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.554 qpair failed and we were unable to recover it. 00:28:04.554 [2024-07-24 19:28:50.543015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.554 [2024-07-24 19:28:50.543028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.554 qpair failed and we were unable to recover it. 00:28:04.554 [2024-07-24 19:28:50.543309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.554 [2024-07-24 19:28:50.543357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.554 qpair failed and we were unable to recover it. 00:28:04.554 [2024-07-24 19:28:50.543604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.554 [2024-07-24 19:28:50.543617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.554 qpair failed and we were unable to recover it. 00:28:04.554 [2024-07-24 19:28:50.543924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.554 [2024-07-24 19:28:50.543965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.554 qpair failed and we were unable to recover it. 
00:28:04.554 [2024-07-24 19:28:50.544281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.554 [2024-07-24 19:28:50.544321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.554 qpair failed and we were unable to recover it. 00:28:04.554 [2024-07-24 19:28:50.544545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.554 [2024-07-24 19:28:50.544585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.554 qpair failed and we were unable to recover it. 00:28:04.554 [2024-07-24 19:28:50.544874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.554 [2024-07-24 19:28:50.544915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.554 qpair failed and we were unable to recover it. 00:28:04.554 [2024-07-24 19:28:50.545219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.554 [2024-07-24 19:28:50.545259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.554 qpair failed and we were unable to recover it. 00:28:04.554 [2024-07-24 19:28:50.545555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.554 [2024-07-24 19:28:50.545594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.554 qpair failed and we were unable to recover it. 00:28:04.554 [2024-07-24 19:28:50.545869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.554 [2024-07-24 19:28:50.545920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.555 qpair failed and we were unable to recover it. 00:28:04.555 [2024-07-24 19:28:50.546204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.555 [2024-07-24 19:28:50.546244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.555 qpair failed and we were unable to recover it. 00:28:04.555 [2024-07-24 19:28:50.546612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.555 [2024-07-24 19:28:50.546652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.555 qpair failed and we were unable to recover it. 00:28:04.555 [2024-07-24 19:28:50.546954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.555 [2024-07-24 19:28:50.546995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.555 qpair failed and we were unable to recover it. 00:28:04.555 [2024-07-24 19:28:50.547282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.555 [2024-07-24 19:28:50.547322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.555 qpair failed and we were unable to recover it. 
00:28:04.555 [2024-07-24 19:28:50.547544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.555 [2024-07-24 19:28:50.547584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.555 qpair failed and we were unable to recover it. 00:28:04.555 [2024-07-24 19:28:50.547871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.555 [2024-07-24 19:28:50.547884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.555 qpair failed and we were unable to recover it. 00:28:04.555 [2024-07-24 19:28:50.548097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.555 [2024-07-24 19:28:50.548110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.555 qpair failed and we were unable to recover it. 00:28:04.555 [2024-07-24 19:28:50.548289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.555 [2024-07-24 19:28:50.548302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.555 qpair failed and we were unable to recover it. 00:28:04.555 [2024-07-24 19:28:50.548549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.555 [2024-07-24 19:28:50.548590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.555 qpair failed and we were unable to recover it. 00:28:04.555 [2024-07-24 19:28:50.548884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.555 [2024-07-24 19:28:50.548926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.555 qpair failed and we were unable to recover it. 00:28:04.555 [2024-07-24 19:28:50.549241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.555 [2024-07-24 19:28:50.549281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.555 qpair failed and we were unable to recover it. 00:28:04.555 [2024-07-24 19:28:50.549579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.555 [2024-07-24 19:28:50.549620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.555 qpair failed and we were unable to recover it. 00:28:04.555 [2024-07-24 19:28:50.549969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.555 [2024-07-24 19:28:50.550010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.555 qpair failed and we were unable to recover it. 00:28:04.555 [2024-07-24 19:28:50.550372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.555 [2024-07-24 19:28:50.550413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.555 qpair failed and we were unable to recover it. 
00:28:04.555 [2024-07-24 19:28:50.550728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.555 [2024-07-24 19:28:50.550775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.555 qpair failed and we were unable to recover it. 00:28:04.555 [2024-07-24 19:28:50.551013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.555 [2024-07-24 19:28:50.551055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.555 qpair failed and we were unable to recover it. 00:28:04.555 [2024-07-24 19:28:50.551393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.555 [2024-07-24 19:28:50.551434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.555 qpair failed and we were unable to recover it. 00:28:04.555 [2024-07-24 19:28:50.551637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.555 [2024-07-24 19:28:50.551650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.555 qpair failed and we were unable to recover it. 00:28:04.555 [2024-07-24 19:28:50.551863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.555 [2024-07-24 19:28:50.551876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.555 qpair failed and we were unable to recover it. 00:28:04.555 [2024-07-24 19:28:50.552106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.555 [2024-07-24 19:28:50.552119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.555 qpair failed and we were unable to recover it. 00:28:04.555 [2024-07-24 19:28:50.552347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.555 [2024-07-24 19:28:50.552360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.555 qpair failed and we were unable to recover it. 00:28:04.555 [2024-07-24 19:28:50.552546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.555 [2024-07-24 19:28:50.552559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.555 qpair failed and we were unable to recover it. 00:28:04.555 [2024-07-24 19:28:50.552703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.555 [2024-07-24 19:28:50.552752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.555 qpair failed and we were unable to recover it. 00:28:04.555 [2024-07-24 19:28:50.553065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.555 [2024-07-24 19:28:50.553105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.555 qpair failed and we were unable to recover it. 
00:28:04.555 [2024-07-24 19:28:50.553335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.555 [2024-07-24 19:28:50.553375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.555 qpair failed and we were unable to recover it. 00:28:04.555 [2024-07-24 19:28:50.553558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.555 [2024-07-24 19:28:50.553571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.555 qpair failed and we were unable to recover it. 00:28:04.555 [2024-07-24 19:28:50.554032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.555 [2024-07-24 19:28:50.554068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.555 qpair failed and we were unable to recover it. 00:28:04.555 [2024-07-24 19:28:50.554231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.555 [2024-07-24 19:28:50.554244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.555 qpair failed and we were unable to recover it. 00:28:04.555 [2024-07-24 19:28:50.554521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.555 [2024-07-24 19:28:50.554534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.555 qpair failed and we were unable to recover it. 00:28:04.555 [2024-07-24 19:28:50.554837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.555 [2024-07-24 19:28:50.554850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.555 qpair failed and we were unable to recover it. 00:28:04.555 [2024-07-24 19:28:50.555057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.555 [2024-07-24 19:28:50.555071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.555 qpair failed and we were unable to recover it. 00:28:04.555 [2024-07-24 19:28:50.555284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.555 [2024-07-24 19:28:50.555298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.555 qpair failed and we were unable to recover it. 00:28:04.555 [2024-07-24 19:28:50.555455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.555 [2024-07-24 19:28:50.555469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.555 qpair failed and we were unable to recover it. 00:28:04.555 [2024-07-24 19:28:50.555626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.555 [2024-07-24 19:28:50.555640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.555 qpair failed and we were unable to recover it. 
00:28:04.555 [2024-07-24 19:28:50.555850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.555 [2024-07-24 19:28:50.555864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.555 qpair failed and we were unable to recover it. 00:28:04.556 [2024-07-24 19:28:50.556087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.556 [2024-07-24 19:28:50.556100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.556 qpair failed and we were unable to recover it. 00:28:04.556 [2024-07-24 19:28:50.556375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.556 [2024-07-24 19:28:50.556389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.556 qpair failed and we were unable to recover it. 00:28:04.556 [2024-07-24 19:28:50.556610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.556 [2024-07-24 19:28:50.556625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.556 qpair failed and we were unable to recover it. 00:28:04.556 [2024-07-24 19:28:50.556838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.556 [2024-07-24 19:28:50.556852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.556 qpair failed and we were unable to recover it. 00:28:04.556 [2024-07-24 19:28:50.557098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.556 [2024-07-24 19:28:50.557111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.556 qpair failed and we were unable to recover it. 00:28:04.556 [2024-07-24 19:28:50.557271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.556 [2024-07-24 19:28:50.557284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.556 qpair failed and we were unable to recover it. 00:28:04.556 [2024-07-24 19:28:50.557506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.556 [2024-07-24 19:28:50.557519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.556 qpair failed and we were unable to recover it. 00:28:04.556 [2024-07-24 19:28:50.557680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.556 [2024-07-24 19:28:50.557694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.556 qpair failed and we were unable to recover it. 00:28:04.556 [2024-07-24 19:28:50.557910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.556 [2024-07-24 19:28:50.557924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.556 qpair failed and we were unable to recover it. 
00:28:04.556 [2024-07-24 19:28:50.558084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.556 [2024-07-24 19:28:50.558096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.556 qpair failed and we were unable to recover it. 00:28:04.556 [2024-07-24 19:28:50.558257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.556 [2024-07-24 19:28:50.558271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.556 qpair failed and we were unable to recover it. 00:28:04.556 [2024-07-24 19:28:50.558511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.556 [2024-07-24 19:28:50.558524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.556 qpair failed and we were unable to recover it. 00:28:04.556 [2024-07-24 19:28:50.558734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.556 [2024-07-24 19:28:50.558748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.556 qpair failed and we were unable to recover it. 00:28:04.556 [2024-07-24 19:28:50.558898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.556 [2024-07-24 19:28:50.558911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.556 qpair failed and we were unable to recover it. 00:28:04.556 [2024-07-24 19:28:50.559231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.556 [2024-07-24 19:28:50.559243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.556 qpair failed and we were unable to recover it. 00:28:04.556 [2024-07-24 19:28:50.559388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.556 [2024-07-24 19:28:50.559401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.556 qpair failed and we were unable to recover it. 00:28:04.556 [2024-07-24 19:28:50.559564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.556 [2024-07-24 19:28:50.559578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.556 qpair failed and we were unable to recover it. 00:28:04.556 [2024-07-24 19:28:50.559855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.556 [2024-07-24 19:28:50.559869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.556 qpair failed and we were unable to recover it. 00:28:04.556 [2024-07-24 19:28:50.560024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.556 [2024-07-24 19:28:50.560037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.556 qpair failed and we were unable to recover it. 
00:28:04.556 [2024-07-24 19:28:50.560244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.556 [2024-07-24 19:28:50.560260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.556 qpair failed and we were unable to recover it. 00:28:04.556 [2024-07-24 19:28:50.560428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.556 [2024-07-24 19:28:50.560468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.556 qpair failed and we were unable to recover it. 00:28:04.556 [2024-07-24 19:28:50.560689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.556 [2024-07-24 19:28:50.560742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.556 qpair failed and we were unable to recover it. 00:28:04.556 [2024-07-24 19:28:50.560957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.556 [2024-07-24 19:28:50.560997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.556 qpair failed and we were unable to recover it. 00:28:04.556 [2024-07-24 19:28:50.561300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.556 [2024-07-24 19:28:50.561340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.556 qpair failed and we were unable to recover it. 00:28:04.556 [2024-07-24 19:28:50.561695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.556 [2024-07-24 19:28:50.561708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.556 qpair failed and we were unable to recover it. 00:28:04.556 [2024-07-24 19:28:50.561945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.556 [2024-07-24 19:28:50.561959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.556 qpair failed and we were unable to recover it. 00:28:04.556 [2024-07-24 19:28:50.562070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.556 [2024-07-24 19:28:50.562084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.556 qpair failed and we were unable to recover it. 00:28:04.556 [2024-07-24 19:28:50.562305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.556 [2024-07-24 19:28:50.562318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.556 qpair failed and we were unable to recover it. 00:28:04.556 [2024-07-24 19:28:50.562594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.556 [2024-07-24 19:28:50.562608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.556 qpair failed and we were unable to recover it. 
00:28:04.556 [2024-07-24 19:28:50.562769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.556 [2024-07-24 19:28:50.562783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.556 qpair failed and we were unable to recover it. 00:28:04.556 [2024-07-24 19:28:50.563077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.556 [2024-07-24 19:28:50.563091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.556 qpair failed and we were unable to recover it. 00:28:04.556 [2024-07-24 19:28:50.563300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.556 [2024-07-24 19:28:50.563312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.556 qpair failed and we were unable to recover it. 00:28:04.556 [2024-07-24 19:28:50.563544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.556 [2024-07-24 19:28:50.563558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.556 qpair failed and we were unable to recover it. 00:28:04.556 [2024-07-24 19:28:50.563783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.556 [2024-07-24 19:28:50.563797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.556 qpair failed and we were unable to recover it. 00:28:04.556 [2024-07-24 19:28:50.563953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.556 [2024-07-24 19:28:50.563965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.557 qpair failed and we were unable to recover it. 00:28:04.557 [2024-07-24 19:28:50.564201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.557 [2024-07-24 19:28:50.564214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.557 qpair failed and we were unable to recover it. 00:28:04.557 [2024-07-24 19:28:50.564438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.557 [2024-07-24 19:28:50.564451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.557 qpair failed and we were unable to recover it. 00:28:04.557 [2024-07-24 19:28:50.564612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.557 [2024-07-24 19:28:50.564626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.557 qpair failed and we were unable to recover it. 00:28:04.557 [2024-07-24 19:28:50.564834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.557 [2024-07-24 19:28:50.564847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.557 qpair failed and we were unable to recover it. 
00:28:04.557 [2024-07-24 19:28:50.565005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.557 [2024-07-24 19:28:50.565018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.557 qpair failed and we were unable to recover it. 00:28:04.557 [2024-07-24 19:28:50.565244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.557 [2024-07-24 19:28:50.565257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.557 qpair failed and we were unable to recover it. 00:28:04.557 [2024-07-24 19:28:50.565395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.557 [2024-07-24 19:28:50.565408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.557 qpair failed and we were unable to recover it. 00:28:04.557 [2024-07-24 19:28:50.565533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.557 [2024-07-24 19:28:50.565572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.557 qpair failed and we were unable to recover it. 00:28:04.557 [2024-07-24 19:28:50.565861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.557 [2024-07-24 19:28:50.565902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.557 qpair failed and we were unable to recover it. 00:28:04.557 [2024-07-24 19:28:50.566219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.557 [2024-07-24 19:28:50.566259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.557 qpair failed and we were unable to recover it. 00:28:04.557 [2024-07-24 19:28:50.566597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.557 [2024-07-24 19:28:50.566638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.557 qpair failed and we were unable to recover it. 00:28:04.557 [2024-07-24 19:28:50.567021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.557 [2024-07-24 19:28:50.567100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.557 qpair failed and we were unable to recover it. 00:28:04.557 [2024-07-24 19:28:50.567354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.557 [2024-07-24 19:28:50.567399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.557 qpair failed and we were unable to recover it. 00:28:04.557 [2024-07-24 19:28:50.567565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.557 [2024-07-24 19:28:50.567607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.557 qpair failed and we were unable to recover it. 
00:28:04.557 [2024-07-24 19:28:50.567883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.557 [2024-07-24 19:28:50.567925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.557 qpair failed and we were unable to recover it. 00:28:04.557 [2024-07-24 19:28:50.568263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.557 [2024-07-24 19:28:50.568303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.557 qpair failed and we were unable to recover it. 00:28:04.557 [2024-07-24 19:28:50.568510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.557 [2024-07-24 19:28:50.568551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.557 qpair failed and we were unable to recover it. 00:28:04.557 [2024-07-24 19:28:50.568762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.557 [2024-07-24 19:28:50.568780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.557 qpair failed and we were unable to recover it. 00:28:04.557 [2024-07-24 19:28:50.568949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.557 [2024-07-24 19:28:50.568990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.557 qpair failed and we were unable to recover it. 00:28:04.557 [2024-07-24 19:28:50.569214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.557 [2024-07-24 19:28:50.569254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.557 qpair failed and we were unable to recover it. 00:28:04.557 [2024-07-24 19:28:50.569536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.557 [2024-07-24 19:28:50.569576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.557 qpair failed and we were unable to recover it. 00:28:04.557 [2024-07-24 19:28:50.569876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.557 [2024-07-24 19:28:50.569895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.557 qpair failed and we were unable to recover it. 00:28:04.557 [2024-07-24 19:28:50.570132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.557 [2024-07-24 19:28:50.570150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.557 qpair failed and we were unable to recover it. 00:28:04.557 [2024-07-24 19:28:50.570389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.557 [2024-07-24 19:28:50.570406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.557 qpair failed and we were unable to recover it. 
00:28:04.557 [2024-07-24 19:28:50.570642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.557 [2024-07-24 19:28:50.570660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.557 qpair failed and we were unable to recover it. 00:28:04.557 [2024-07-24 19:28:50.570925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.557 [2024-07-24 19:28:50.570943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.557 qpair failed and we were unable to recover it. 00:28:04.557 [2024-07-24 19:28:50.571181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.557 [2024-07-24 19:28:50.571198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.557 qpair failed and we were unable to recover it. 00:28:04.557 [2024-07-24 19:28:50.571438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.557 [2024-07-24 19:28:50.571455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.557 qpair failed and we were unable to recover it. 00:28:04.557 [2024-07-24 19:28:50.571692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.557 [2024-07-24 19:28:50.571709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.557 qpair failed and we were unable to recover it. 00:28:04.557 [2024-07-24 19:28:50.572032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.558 [2024-07-24 19:28:50.572049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.558 qpair failed and we were unable to recover it. 00:28:04.558 [2024-07-24 19:28:50.572221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.558 [2024-07-24 19:28:50.572238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.558 qpair failed and we were unable to recover it. 00:28:04.558 [2024-07-24 19:28:50.572477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.558 [2024-07-24 19:28:50.572494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.558 qpair failed and we were unable to recover it. 00:28:04.558 [2024-07-24 19:28:50.572661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.558 [2024-07-24 19:28:50.572680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.558 qpair failed and we were unable to recover it. 00:28:04.558 [2024-07-24 19:28:50.572904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.558 [2024-07-24 19:28:50.572921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.558 qpair failed and we were unable to recover it. 
00:28:04.558 [2024-07-24 19:28:50.573217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.558 [2024-07-24 19:28:50.573257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.558 qpair failed and we were unable to recover it. 00:28:04.558 [2024-07-24 19:28:50.573562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.558 [2024-07-24 19:28:50.573602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.558 qpair failed and we were unable to recover it. 00:28:04.558 [2024-07-24 19:28:50.573834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.558 [2024-07-24 19:28:50.573859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.558 qpair failed and we were unable to recover it. 00:28:04.558 [2024-07-24 19:28:50.574041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.558 [2024-07-24 19:28:50.574081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.558 qpair failed and we were unable to recover it. 00:28:04.558 [2024-07-24 19:28:50.574433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.558 [2024-07-24 19:28:50.574480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.558 qpair failed and we were unable to recover it. 00:28:04.558 [2024-07-24 19:28:50.574769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.558 [2024-07-24 19:28:50.574787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.558 qpair failed and we were unable to recover it. 00:28:04.558 [2024-07-24 19:28:50.575006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.558 [2024-07-24 19:28:50.575023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.558 qpair failed and we were unable to recover it. 00:28:04.558 [2024-07-24 19:28:50.575186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.558 [2024-07-24 19:28:50.575203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.558 qpair failed and we were unable to recover it. 00:28:04.558 [2024-07-24 19:28:50.575431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.558 [2024-07-24 19:28:50.575471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.558 qpair failed and we were unable to recover it. 00:28:04.558 [2024-07-24 19:28:50.575676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.558 [2024-07-24 19:28:50.575729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.558 qpair failed and we were unable to recover it. 
00:28:04.558 [2024-07-24 19:28:50.575936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.558 [2024-07-24 19:28:50.575976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.558 qpair failed and we were unable to recover it. 00:28:04.558 [2024-07-24 19:28:50.576356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.558 [2024-07-24 19:28:50.576396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.558 qpair failed and we were unable to recover it. 00:28:04.558 [2024-07-24 19:28:50.576583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.558 [2024-07-24 19:28:50.576601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.558 qpair failed and we were unable to recover it. 00:28:04.558 [2024-07-24 19:28:50.576768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.558 [2024-07-24 19:28:50.576809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.558 qpair failed and we were unable to recover it. 00:28:04.558 [2024-07-24 19:28:50.577078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.558 [2024-07-24 19:28:50.577118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.558 qpair failed and we were unable to recover it. 00:28:04.558 [2024-07-24 19:28:50.577335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.558 [2024-07-24 19:28:50.577375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.558 qpair failed and we were unable to recover it. 00:28:04.558 [2024-07-24 19:28:50.577639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.558 [2024-07-24 19:28:50.577656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.558 qpair failed and we were unable to recover it. 00:28:04.558 [2024-07-24 19:28:50.577875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.558 [2024-07-24 19:28:50.577893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.558 qpair failed and we were unable to recover it. 00:28:04.558 [2024-07-24 19:28:50.578182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.558 [2024-07-24 19:28:50.578199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.558 qpair failed and we were unable to recover it. 00:28:04.558 [2024-07-24 19:28:50.578432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.558 [2024-07-24 19:28:50.578449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.558 qpair failed and we were unable to recover it. 
00:28:04.558 [2024-07-24 19:28:50.578629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.558 [2024-07-24 19:28:50.578646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.558 qpair failed and we were unable to recover it. 00:28:04.558 [2024-07-24 19:28:50.578823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.558 [2024-07-24 19:28:50.578864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.558 qpair failed and we were unable to recover it. 00:28:04.558 [2024-07-24 19:28:50.579250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.558 [2024-07-24 19:28:50.579290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.558 qpair failed and we were unable to recover it. 00:28:04.558 [2024-07-24 19:28:50.579563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.558 [2024-07-24 19:28:50.579604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.558 qpair failed and we were unable to recover it. 00:28:04.558 [2024-07-24 19:28:50.579961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.558 [2024-07-24 19:28:50.580002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.558 qpair failed and we were unable to recover it. 00:28:04.558 [2024-07-24 19:28:50.580361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.558 [2024-07-24 19:28:50.580401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.558 qpair failed and we were unable to recover it. 00:28:04.558 [2024-07-24 19:28:50.580613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.558 [2024-07-24 19:28:50.580630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.558 qpair failed and we were unable to recover it. 00:28:04.558 [2024-07-24 19:28:50.580800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.558 [2024-07-24 19:28:50.580818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.558 qpair failed and we were unable to recover it. 00:28:04.558 [2024-07-24 19:28:50.581107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.558 [2024-07-24 19:28:50.581147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.558 qpair failed and we were unable to recover it. 00:28:04.558 [2024-07-24 19:28:50.581365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.558 [2024-07-24 19:28:50.581405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.558 qpair failed and we were unable to recover it. 
00:28:04.559 [2024-07-24 19:28:50.581744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.559 [2024-07-24 19:28:50.581786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.559 qpair failed and we were unable to recover it. 00:28:04.559 [2024-07-24 19:28:50.582055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.559 [2024-07-24 19:28:50.582100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.559 qpair failed and we were unable to recover it. 00:28:04.559 [2024-07-24 19:28:50.582437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.559 [2024-07-24 19:28:50.582477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.559 qpair failed and we were unable to recover it. 00:28:04.559 [2024-07-24 19:28:50.582683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.559 [2024-07-24 19:28:50.582731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.559 qpair failed and we were unable to recover it. 00:28:04.559 [2024-07-24 19:28:50.583013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.559 [2024-07-24 19:28:50.583030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.559 qpair failed and we were unable to recover it. 00:28:04.559 [2024-07-24 19:28:50.583356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.559 [2024-07-24 19:28:50.583395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.559 qpair failed and we were unable to recover it. 00:28:04.559 [2024-07-24 19:28:50.583665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.559 [2024-07-24 19:28:50.583705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.559 qpair failed and we were unable to recover it. 00:28:04.559 [2024-07-24 19:28:50.583991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.559 [2024-07-24 19:28:50.584032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.559 qpair failed and we were unable to recover it. 00:28:04.559 [2024-07-24 19:28:50.584260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.559 [2024-07-24 19:28:50.584300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.559 qpair failed and we were unable to recover it. 00:28:04.559 [2024-07-24 19:28:50.584594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.559 [2024-07-24 19:28:50.584634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.559 qpair failed and we were unable to recover it. 
00:28:04.559 [2024-07-24 19:28:50.584910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.559 [2024-07-24 19:28:50.584951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.559 qpair failed and we were unable to recover it. 00:28:04.559 [2024-07-24 19:28:50.585308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.559 [2024-07-24 19:28:50.585348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.559 qpair failed and we were unable to recover it. 00:28:04.559 [2024-07-24 19:28:50.585680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.559 [2024-07-24 19:28:50.585698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.559 qpair failed and we were unable to recover it. 00:28:04.559 [2024-07-24 19:28:50.585963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.559 [2024-07-24 19:28:50.585980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.559 qpair failed and we were unable to recover it. 00:28:04.559 [2024-07-24 19:28:50.586295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.559 [2024-07-24 19:28:50.586336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.559 qpair failed and we were unable to recover it. 00:28:04.559 [2024-07-24 19:28:50.586631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.559 [2024-07-24 19:28:50.586672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.559 qpair failed and we were unable to recover it. 00:28:04.559 [2024-07-24 19:28:50.586969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.559 [2024-07-24 19:28:50.587010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.559 qpair failed and we were unable to recover it. 00:28:04.559 [2024-07-24 19:28:50.587298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.559 [2024-07-24 19:28:50.587338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.559 qpair failed and we were unable to recover it. 00:28:04.559 [2024-07-24 19:28:50.587626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.559 [2024-07-24 19:28:50.587666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.559 qpair failed and we were unable to recover it. 00:28:04.559 [2024-07-24 19:28:50.588022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.559 [2024-07-24 19:28:50.588064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.559 qpair failed and we were unable to recover it. 
00:28:04.561 [2024-07-24 19:28:50.611014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.561 [2024-07-24 19:28:50.611056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420
00:28:04.561 qpair failed and we were unable to recover it.
00:28:04.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1688131 Killed "${NVMF_APP[@]}" "$@"
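errno = 111 is ECONNREFUSED on Linux: the kernel actively refused the TCP connection because nothing was listening on 10.0.0.2:4420. That matches the "Killed" line above, where target_disconnect.sh deliberately killed the running nvmf_tgt process, so every reconnect attempt by the initiator fails until a new target comes up. A minimal shell sketch of the same failure mode; the address and port mirror the log, but the probe itself is an illustration, not part of the SPDK test scripts:

#!/usr/bin/env bash
# Resolve errno 111 to its libc message ("Connection refused").
python3 -c 'import os; print(111, os.strerror(111))'

# A TCP connect to a port with no listener fails the same way the SPDK
# initiator does while the killed target is down. /dev/tcp is a bash
# built-in redirection; the 1-second timeout keeps the probe bounded.
if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "10.0.0.2:4420 is not accepting connections (ECONNREFUSED expected)"
fi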
00:28:04.561 19:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:28:04.561 19:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:28:04.561 19:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:28:04.561 19:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:28:04.562 19:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:04.563 19:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1688953
00:28:04.563 19:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1688953
00:28:04.563 19:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:28:04.563 19:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1688953 ']'
00:28:04.563 19:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:04.563 19:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:28:04.563 19:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:04.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:04.563 19:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:04.563 19:28:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
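The trace above is nvmfappstart relaunching the target: nvmf/common.sh starts a fresh nvmf_tgt (pid 1688953) inside the cvl_0_0_ns_spdk network namespace, and waitforlisten blocks until the app's RPC socket at /var/tmp/spdk.sock answers. A simplified sketch of that start-and-wait sequence, assuming the usual SPDK tree layout; the polling loop only approximates what autotest_common.sh's waitforlisten does and is not a verbatim copy:

#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Launch the target in the test namespace, mirroring the traced command.
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
launcher_pid=$!

# Poll the RPC socket until the app answers (max_retries=100 as in the trace).
for ((i = 0; i < 100; i++)); do
    if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
        echo "nvmf_tgt is up and listening on /var/tmp/spdk.sock"
        break
    fi
    sleep 0.5
done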
00:28:04.564 [2024-07-24 19:28:50.631503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.564 [2024-07-24 19:28:50.631540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420
00:28:04.564 qpair failed and we were unable to recover it.
00:28:04.565 [2024-07-24 19:28:50.640030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.565 [2024-07-24 19:28:50.640048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.565 qpair failed and we were unable to recover it. 00:28:04.565 [2024-07-24 19:28:50.640228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.565 [2024-07-24 19:28:50.640246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.565 qpair failed and we were unable to recover it. 00:28:04.565 [2024-07-24 19:28:50.640484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.565 [2024-07-24 19:28:50.640501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.565 qpair failed and we were unable to recover it. 00:28:04.565 [2024-07-24 19:28:50.640809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.565 [2024-07-24 19:28:50.640826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.565 qpair failed and we were unable to recover it. 00:28:04.565 [2024-07-24 19:28:50.641059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.565 [2024-07-24 19:28:50.641077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.565 qpair failed and we were unable to recover it. 00:28:04.565 [2024-07-24 19:28:50.641313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.565 [2024-07-24 19:28:50.641331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.565 qpair failed and we were unable to recover it. 00:28:04.565 [2024-07-24 19:28:50.641638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.565 [2024-07-24 19:28:50.641655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.565 qpair failed and we were unable to recover it. 00:28:04.565 [2024-07-24 19:28:50.641949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.565 [2024-07-24 19:28:50.641971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.565 qpair failed and we were unable to recover it. 00:28:04.565 [2024-07-24 19:28:50.642258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.565 [2024-07-24 19:28:50.642275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.565 qpair failed and we were unable to recover it. 00:28:04.565 [2024-07-24 19:28:50.642526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.565 [2024-07-24 19:28:50.642544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.565 qpair failed and we were unable to recover it. 
00:28:04.565 [2024-07-24 19:28:50.642832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.565 [2024-07-24 19:28:50.642849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.565 qpair failed and we were unable to recover it. 00:28:04.565 [2024-07-24 19:28:50.643082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.565 [2024-07-24 19:28:50.643099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.565 qpair failed and we were unable to recover it. 00:28:04.565 [2024-07-24 19:28:50.643271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.565 [2024-07-24 19:28:50.643288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.565 qpair failed and we were unable to recover it. 00:28:04.565 [2024-07-24 19:28:50.643520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.565 [2024-07-24 19:28:50.643537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.565 qpair failed and we were unable to recover it. 00:28:04.565 [2024-07-24 19:28:50.643825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.565 [2024-07-24 19:28:50.643843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.565 qpair failed and we were unable to recover it. 00:28:04.565 [2024-07-24 19:28:50.644069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.565 [2024-07-24 19:28:50.644087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.565 qpair failed and we were unable to recover it. 00:28:04.565 [2024-07-24 19:28:50.644324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.565 [2024-07-24 19:28:50.644341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.565 qpair failed and we were unable to recover it. 00:28:04.565 [2024-07-24 19:28:50.644564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.565 [2024-07-24 19:28:50.644582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.565 qpair failed and we were unable to recover it. 00:28:04.565 [2024-07-24 19:28:50.644877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.565 [2024-07-24 19:28:50.644894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.565 qpair failed and we were unable to recover it. 00:28:04.565 [2024-07-24 19:28:50.645180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.565 [2024-07-24 19:28:50.645197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.565 qpair failed and we were unable to recover it. 
00:28:04.565 [2024-07-24 19:28:50.645433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.565 [2024-07-24 19:28:50.645451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.565 qpair failed and we were unable to recover it. 00:28:04.565 [2024-07-24 19:28:50.645678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.565 [2024-07-24 19:28:50.645695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.565 qpair failed and we were unable to recover it. 00:28:04.565 [2024-07-24 19:28:50.645871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.565 [2024-07-24 19:28:50.645891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.565 qpair failed and we were unable to recover it. 00:28:04.565 [2024-07-24 19:28:50.646205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.565 [2024-07-24 19:28:50.646223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.565 qpair failed and we were unable to recover it. 00:28:04.565 [2024-07-24 19:28:50.646456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.565 [2024-07-24 19:28:50.646473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.565 qpair failed and we were unable to recover it. 00:28:04.565 [2024-07-24 19:28:50.646699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.565 [2024-07-24 19:28:50.646721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.565 qpair failed and we were unable to recover it. 00:28:04.565 [2024-07-24 19:28:50.646951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.565 [2024-07-24 19:28:50.646968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.565 qpair failed and we were unable to recover it. 00:28:04.565 [2024-07-24 19:28:50.647238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.565 [2024-07-24 19:28:50.647255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.565 qpair failed and we were unable to recover it. 00:28:04.565 [2024-07-24 19:28:50.647420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.565 [2024-07-24 19:28:50.647436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.565 qpair failed and we were unable to recover it. 00:28:04.565 [2024-07-24 19:28:50.647667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.565 [2024-07-24 19:28:50.647684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.565 qpair failed and we were unable to recover it. 
00:28:04.565 [2024-07-24 19:28:50.647853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.565 [2024-07-24 19:28:50.647871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.565 qpair failed and we were unable to recover it. 00:28:04.565 [2024-07-24 19:28:50.648157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.566 [2024-07-24 19:28:50.648174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.566 qpair failed and we were unable to recover it. 00:28:04.566 [2024-07-24 19:28:50.648390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.566 [2024-07-24 19:28:50.648407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.566 qpair failed and we were unable to recover it. 00:28:04.566 [2024-07-24 19:28:50.648692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.566 [2024-07-24 19:28:50.648709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.566 qpair failed and we were unable to recover it. 00:28:04.566 [2024-07-24 19:28:50.648942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.566 [2024-07-24 19:28:50.648962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.566 qpair failed and we were unable to recover it. 00:28:04.566 [2024-07-24 19:28:50.649202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.566 [2024-07-24 19:28:50.649219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.566 qpair failed and we were unable to recover it. 00:28:04.566 [2024-07-24 19:28:50.649336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.566 [2024-07-24 19:28:50.649353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.566 qpair failed and we were unable to recover it. 00:28:04.566 [2024-07-24 19:28:50.649549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.566 [2024-07-24 19:28:50.649567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.566 qpair failed and we were unable to recover it. 00:28:04.566 [2024-07-24 19:28:50.649816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.566 [2024-07-24 19:28:50.649834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.566 qpair failed and we were unable to recover it. 00:28:04.566 [2024-07-24 19:28:50.650116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.566 [2024-07-24 19:28:50.650134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.566 qpair failed and we were unable to recover it. 
00:28:04.566 [2024-07-24 19:28:50.650371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.566 [2024-07-24 19:28:50.650388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.566 qpair failed and we were unable to recover it. 00:28:04.566 [2024-07-24 19:28:50.650604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.566 [2024-07-24 19:28:50.650622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.566 qpair failed and we were unable to recover it. 00:28:04.566 [2024-07-24 19:28:50.650929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.566 [2024-07-24 19:28:50.650947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.566 qpair failed and we were unable to recover it. 00:28:04.566 [2024-07-24 19:28:50.651196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.566 [2024-07-24 19:28:50.651213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.566 qpair failed and we were unable to recover it. 00:28:04.566 [2024-07-24 19:28:50.651500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.566 [2024-07-24 19:28:50.651517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.566 qpair failed and we were unable to recover it. 00:28:04.566 [2024-07-24 19:28:50.651723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.566 [2024-07-24 19:28:50.651741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.566 qpair failed and we were unable to recover it. 00:28:04.566 [2024-07-24 19:28:50.652042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.566 [2024-07-24 19:28:50.652059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.566 qpair failed and we were unable to recover it. 00:28:04.566 [2024-07-24 19:28:50.652294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.566 [2024-07-24 19:28:50.652312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.566 qpair failed and we were unable to recover it. 00:28:04.566 [2024-07-24 19:28:50.652485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.566 [2024-07-24 19:28:50.652503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.566 qpair failed and we were unable to recover it. 00:28:04.566 [2024-07-24 19:28:50.652777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.566 [2024-07-24 19:28:50.652794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.566 qpair failed and we were unable to recover it. 
00:28:04.566 [2024-07-24 19:28:50.653018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.566 [2024-07-24 19:28:50.653035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.566 qpair failed and we were unable to recover it. 00:28:04.566 [2024-07-24 19:28:50.653258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.566 [2024-07-24 19:28:50.653276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.566 qpair failed and we were unable to recover it. 00:28:04.566 [2024-07-24 19:28:50.653431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.566 [2024-07-24 19:28:50.653448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.566 qpair failed and we were unable to recover it. 00:28:04.566 [2024-07-24 19:28:50.653735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.566 [2024-07-24 19:28:50.653753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.566 qpair failed and we were unable to recover it. 00:28:04.566 [2024-07-24 19:28:50.654007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.566 [2024-07-24 19:28:50.654025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.566 qpair failed and we were unable to recover it. 00:28:04.566 [2024-07-24 19:28:50.654314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.566 [2024-07-24 19:28:50.654331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.566 qpair failed and we were unable to recover it. 00:28:04.566 [2024-07-24 19:28:50.654553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.566 [2024-07-24 19:28:50.654570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.566 qpair failed and we were unable to recover it. 00:28:04.566 [2024-07-24 19:28:50.654735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.566 [2024-07-24 19:28:50.654753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.566 qpair failed and we were unable to recover it. 00:28:04.566 [2024-07-24 19:28:50.654988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.566 [2024-07-24 19:28:50.655006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.566 qpair failed and we were unable to recover it. 00:28:04.566 [2024-07-24 19:28:50.655233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.566 [2024-07-24 19:28:50.655251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.566 qpair failed and we were unable to recover it. 
00:28:04.566 [2024-07-24 19:28:50.655428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.566 [2024-07-24 19:28:50.655445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.566 qpair failed and we were unable to recover it. 00:28:04.566 [2024-07-24 19:28:50.655762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.566 [2024-07-24 19:28:50.655782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.566 qpair failed and we were unable to recover it. 00:28:04.566 [2024-07-24 19:28:50.655943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.566 [2024-07-24 19:28:50.655960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.566 qpair failed and we were unable to recover it. 00:28:04.566 [2024-07-24 19:28:50.656130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.566 [2024-07-24 19:28:50.656148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.566 qpair failed and we were unable to recover it. 00:28:04.566 [2024-07-24 19:28:50.656469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.566 [2024-07-24 19:28:50.656485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.566 qpair failed and we were unable to recover it. 00:28:04.566 [2024-07-24 19:28:50.656665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.566 [2024-07-24 19:28:50.656682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.566 qpair failed and we were unable to recover it. 00:28:04.566 [2024-07-24 19:28:50.656868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.566 [2024-07-24 19:28:50.656886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.566 qpair failed and we were unable to recover it. 00:28:04.566 [2024-07-24 19:28:50.657122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.567 [2024-07-24 19:28:50.657139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.567 qpair failed and we were unable to recover it. 00:28:04.567 [2024-07-24 19:28:50.657323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.567 [2024-07-24 19:28:50.657341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.567 qpair failed and we were unable to recover it. 00:28:04.567 [2024-07-24 19:28:50.657571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.567 [2024-07-24 19:28:50.657588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.567 qpair failed and we were unable to recover it. 
00:28:04.567 [2024-07-24 19:28:50.657825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.567 [2024-07-24 19:28:50.657842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.567 qpair failed and we were unable to recover it. 00:28:04.567 [2024-07-24 19:28:50.658089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.567 [2024-07-24 19:28:50.658106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.567 qpair failed and we were unable to recover it. 00:28:04.567 [2024-07-24 19:28:50.658356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.567 [2024-07-24 19:28:50.658373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.567 qpair failed and we were unable to recover it. 00:28:04.567 [2024-07-24 19:28:50.658678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.567 [2024-07-24 19:28:50.658695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.567 qpair failed and we were unable to recover it. 00:28:04.567 [2024-07-24 19:28:50.658868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.567 [2024-07-24 19:28:50.658886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.567 qpair failed and we were unable to recover it. 00:28:04.567 [2024-07-24 19:28:50.659175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.567 [2024-07-24 19:28:50.659192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.567 qpair failed and we were unable to recover it. 00:28:04.567 [2024-07-24 19:28:50.659295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.567 [2024-07-24 19:28:50.659311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.567 qpair failed and we were unable to recover it. 00:28:04.567 [2024-07-24 19:28:50.659620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.567 [2024-07-24 19:28:50.659637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.567 qpair failed and we were unable to recover it. 00:28:04.567 [2024-07-24 19:28:50.659822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.567 [2024-07-24 19:28:50.659839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.567 qpair failed and we were unable to recover it. 00:28:04.567 [2024-07-24 19:28:50.660071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.567 [2024-07-24 19:28:50.660088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.567 qpair failed and we were unable to recover it. 
00:28:04.567 [2024-07-24 19:28:50.660271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.567 [2024-07-24 19:28:50.660288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.567 qpair failed and we were unable to recover it. 00:28:04.567 [2024-07-24 19:28:50.660523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.567 [2024-07-24 19:28:50.660540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.567 qpair failed and we were unable to recover it. 00:28:04.567 [2024-07-24 19:28:50.660771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.567 [2024-07-24 19:28:50.660788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.567 qpair failed and we were unable to recover it. 00:28:04.567 [2024-07-24 19:28:50.660954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.567 [2024-07-24 19:28:50.660972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.567 qpair failed and we were unable to recover it. 00:28:04.567 [2024-07-24 19:28:50.661216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.567 [2024-07-24 19:28:50.661233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.567 qpair failed and we were unable to recover it. 00:28:04.567 [2024-07-24 19:28:50.661397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.567 [2024-07-24 19:28:50.661414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.567 qpair failed and we were unable to recover it. 00:28:04.567 [2024-07-24 19:28:50.661650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.567 [2024-07-24 19:28:50.661668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.567 qpair failed and we were unable to recover it. 00:28:04.567 [2024-07-24 19:28:50.661983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.567 [2024-07-24 19:28:50.662001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.567 qpair failed and we were unable to recover it. 00:28:04.567 [2024-07-24 19:28:50.662222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.567 [2024-07-24 19:28:50.662239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.567 qpair failed and we were unable to recover it. 00:28:04.567 [2024-07-24 19:28:50.662555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.567 [2024-07-24 19:28:50.662572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.567 qpair failed and we were unable to recover it. 
00:28:04.567 [2024-07-24 19:28:50.662864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.567 [2024-07-24 19:28:50.662881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.567 qpair failed and we were unable to recover it. 00:28:04.567 [2024-07-24 19:28:50.663188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.567 [2024-07-24 19:28:50.663206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.567 qpair failed and we were unable to recover it. 00:28:04.567 [2024-07-24 19:28:50.663424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.567 [2024-07-24 19:28:50.663441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.567 qpair failed and we were unable to recover it. 00:28:04.567 [2024-07-24 19:28:50.663662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.567 [2024-07-24 19:28:50.663680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.567 qpair failed and we were unable to recover it. 00:28:04.567 [2024-07-24 19:28:50.663917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.567 [2024-07-24 19:28:50.663935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.567 qpair failed and we were unable to recover it. 00:28:04.567 [2024-07-24 19:28:50.664265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.567 [2024-07-24 19:28:50.664282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.567 qpair failed and we were unable to recover it. 00:28:04.567 [2024-07-24 19:28:50.664509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.567 [2024-07-24 19:28:50.664527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.567 qpair failed and we were unable to recover it. 00:28:04.567 [2024-07-24 19:28:50.664816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.567 [2024-07-24 19:28:50.664834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.567 qpair failed and we were unable to recover it. 00:28:04.567 [2024-07-24 19:28:50.665001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.567 [2024-07-24 19:28:50.665018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.568 qpair failed and we were unable to recover it. 00:28:04.568 [2024-07-24 19:28:50.665259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.568 [2024-07-24 19:28:50.665276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.568 qpair failed and we were unable to recover it. 
00:28:04.568 [2024-07-24 19:28:50.665610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.568 [2024-07-24 19:28:50.665628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.568 qpair failed and we were unable to recover it. 00:28:04.568 [2024-07-24 19:28:50.665936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.568 [2024-07-24 19:28:50.665954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.568 qpair failed and we were unable to recover it. 00:28:04.568 [2024-07-24 19:28:50.666202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.568 [2024-07-24 19:28:50.666239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.568 qpair failed and we were unable to recover it. 00:28:04.568 [2024-07-24 19:28:50.666602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.568 [2024-07-24 19:28:50.666638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.568 qpair failed and we were unable to recover it. 00:28:04.568 [2024-07-24 19:28:50.666885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.568 [2024-07-24 19:28:50.666908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.568 qpair failed and we were unable to recover it. 00:28:04.568 [2024-07-24 19:28:50.667151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.568 [2024-07-24 19:28:50.667169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.568 qpair failed and we were unable to recover it. 00:28:04.568 [2024-07-24 19:28:50.667460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.568 [2024-07-24 19:28:50.667478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.568 qpair failed and we were unable to recover it. 00:28:04.568 [2024-07-24 19:28:50.667784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.568 [2024-07-24 19:28:50.667802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.568 qpair failed and we were unable to recover it. 00:28:04.568 [2024-07-24 19:28:50.668103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.568 [2024-07-24 19:28:50.668121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.568 qpair failed and we were unable to recover it. 00:28:04.568 [2024-07-24 19:28:50.668429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.568 [2024-07-24 19:28:50.668446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.568 qpair failed and we were unable to recover it. 
00:28:04.568 [2024-07-24 19:28:50.668608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.568 [2024-07-24 19:28:50.668626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.568 qpair failed and we were unable to recover it. 00:28:04.568 [2024-07-24 19:28:50.668739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.568 [2024-07-24 19:28:50.668756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.568 qpair failed and we were unable to recover it. 00:28:04.568 [2024-07-24 19:28:50.669067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.568 [2024-07-24 19:28:50.669084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.568 qpair failed and we were unable to recover it. 00:28:04.568 [2024-07-24 19:28:50.669318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.568 [2024-07-24 19:28:50.669335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.568 qpair failed and we were unable to recover it. 00:28:04.568 [2024-07-24 19:28:50.669644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.568 [2024-07-24 19:28:50.669662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.568 qpair failed and we were unable to recover it. 00:28:04.568 [2024-07-24 19:28:50.669831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.568 [2024-07-24 19:28:50.669853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.568 qpair failed and we were unable to recover it. 00:28:04.568 [2024-07-24 19:28:50.670033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.568 [2024-07-24 19:28:50.670051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.568 qpair failed and we were unable to recover it. 00:28:04.568 [2024-07-24 19:28:50.670335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.568 [2024-07-24 19:28:50.670353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.568 qpair failed and we were unable to recover it. 00:28:04.568 [2024-07-24 19:28:50.670603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.568 [2024-07-24 19:28:50.670621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.568 qpair failed and we were unable to recover it. 00:28:04.568 [2024-07-24 19:28:50.670856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.568 [2024-07-24 19:28:50.670875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.568 qpair failed and we were unable to recover it. 
00:28:04.568 [2024-07-24 19:28:50.671110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.568 [2024-07-24 19:28:50.671128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.568 qpair failed and we were unable to recover it. 00:28:04.568 [2024-07-24 19:28:50.671173] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:28:04.568 [2024-07-24 19:28:50.671217] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:04.568 [2024-07-24 19:28:50.671438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.568 [2024-07-24 19:28:50.671455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.568 qpair failed and we were unable to recover it. 00:28:04.568 [2024-07-24 19:28:50.671708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.568 [2024-07-24 19:28:50.671739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.568 qpair failed and we were unable to recover it. 00:28:04.568 [2024-07-24 19:28:50.671977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.568 [2024-07-24 19:28:50.671994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.568 qpair failed and we were unable to recover it. 00:28:04.568 [2024-07-24 19:28:50.672096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.568 [2024-07-24 19:28:50.672113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.568 qpair failed and we were unable to recover it. 00:28:04.568 [2024-07-24 19:28:50.672340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.568 [2024-07-24 19:28:50.672357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.568 qpair failed and we were unable to recover it. 00:28:04.568 [2024-07-24 19:28:50.672584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.568 [2024-07-24 19:28:50.672601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.568 qpair failed and we were unable to recover it. 00:28:04.568 [2024-07-24 19:28:50.672834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.568 [2024-07-24 19:28:50.672851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.568 qpair failed and we were unable to recover it. 00:28:04.568 [2024-07-24 19:28:50.673161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.568 [2024-07-24 19:28:50.673179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.568 qpair failed and we were unable to recover it. 
00:28:04.568 [2024-07-24 19:28:50.673418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.568 [2024-07-24 19:28:50.673436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.568 qpair failed and we were unable to recover it. 00:28:04.568 [2024-07-24 19:28:50.673677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.568 [2024-07-24 19:28:50.673695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.568 qpair failed and we were unable to recover it. 00:28:04.568 [2024-07-24 19:28:50.673919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.568 [2024-07-24 19:28:50.673936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.568 qpair failed and we were unable to recover it. 00:28:04.569 [2024-07-24 19:28:50.674236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.569 [2024-07-24 19:28:50.674254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.569 qpair failed and we were unable to recover it. 00:28:04.569 [2024-07-24 19:28:50.674422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.569 [2024-07-24 19:28:50.674440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.569 qpair failed and we were unable to recover it. 00:28:04.569 [2024-07-24 19:28:50.674725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.569 [2024-07-24 19:28:50.674742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.569 qpair failed and we were unable to recover it. 00:28:04.569 [2024-07-24 19:28:50.675052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.569 [2024-07-24 19:28:50.675069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.569 qpair failed and we were unable to recover it. 00:28:04.569 [2024-07-24 19:28:50.675256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.569 [2024-07-24 19:28:50.675273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.569 qpair failed and we were unable to recover it. 00:28:04.569 [2024-07-24 19:28:50.675580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.569 [2024-07-24 19:28:50.675597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.569 qpair failed and we were unable to recover it. 00:28:04.569 [2024-07-24 19:28:50.675883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.569 [2024-07-24 19:28:50.675901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.569 qpair failed and we were unable to recover it. 
00:28:04.569 [2024-07-24 19:28:50.676188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.569 [2024-07-24 19:28:50.676205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420
00:28:04.569 qpair failed and we were unable to recover it.
00:28:04.569 [... the identical connect() failed (errno = 111) / sock connection error pair for tqpair=0x7fd544000b90 repeats, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:28:04.572 EAL: No free 2048 kB hugepages reported on node 1
00:28:04.572 [... further attempts on tqpair=0x7fd544000b90 continue failing through 19:28:50.710 ...]
00:28:04.573 [2024-07-24 19:28:50.710322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.573 [2024-07-24 19:28:50.710345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420
00:28:04.573 qpair failed and we were unable to recover it.
00:28:04.575 [... the same failure pair for tqpair=0x1ce41a0 repeats through 19:28:50.729, each attempt likewise unrecoverable ...]
00:28:04.575 [2024-07-24 19:28:50.729964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.575 [2024-07-24 19:28:50.729982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.575 qpair failed and we were unable to recover it. 00:28:04.575 [2024-07-24 19:28:50.730248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.575 [2024-07-24 19:28:50.730265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.575 qpair failed and we were unable to recover it. 00:28:04.575 [2024-07-24 19:28:50.730548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.575 [2024-07-24 19:28:50.730566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.575 qpair failed and we were unable to recover it. 00:28:04.575 [2024-07-24 19:28:50.730805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.575 [2024-07-24 19:28:50.730823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.575 qpair failed and we were unable to recover it. 00:28:04.575 [2024-07-24 19:28:50.731070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.575 [2024-07-24 19:28:50.731088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.575 qpair failed and we were unable to recover it. 00:28:04.575 [2024-07-24 19:28:50.731344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.575 [2024-07-24 19:28:50.731362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.575 qpair failed and we were unable to recover it. 00:28:04.575 [2024-07-24 19:28:50.731587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.575 [2024-07-24 19:28:50.731605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.575 qpair failed and we were unable to recover it. 00:28:04.575 [2024-07-24 19:28:50.731779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.575 [2024-07-24 19:28:50.731797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.575 qpair failed and we were unable to recover it. 00:28:04.575 [2024-07-24 19:28:50.732101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.575 [2024-07-24 19:28:50.732119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.575 qpair failed and we were unable to recover it. 00:28:04.575 [2024-07-24 19:28:50.732347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.575 [2024-07-24 19:28:50.732364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.575 qpair failed and we were unable to recover it. 
00:28:04.575 [2024-07-24 19:28:50.732658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.575 [2024-07-24 19:28:50.732676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.575 qpair failed and we were unable to recover it. 00:28:04.575 [2024-07-24 19:28:50.732842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.575 [2024-07-24 19:28:50.732860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.575 qpair failed and we were unable to recover it. 00:28:04.575 [2024-07-24 19:28:50.733104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.575 [2024-07-24 19:28:50.733122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.575 qpair failed and we were unable to recover it. 00:28:04.575 [2024-07-24 19:28:50.733357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.575 [2024-07-24 19:28:50.733374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.575 qpair failed and we were unable to recover it. 00:28:04.575 [2024-07-24 19:28:50.733627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.575 [2024-07-24 19:28:50.733645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.575 qpair failed and we were unable to recover it. 00:28:04.575 [2024-07-24 19:28:50.733961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.575 [2024-07-24 19:28:50.733979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.575 qpair failed and we were unable to recover it. 00:28:04.575 [2024-07-24 19:28:50.734199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.575 [2024-07-24 19:28:50.734217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.575 qpair failed and we were unable to recover it. 00:28:04.575 [2024-07-24 19:28:50.734459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.575 [2024-07-24 19:28:50.734477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.575 qpair failed and we were unable to recover it. 00:28:04.575 [2024-07-24 19:28:50.734739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.575 [2024-07-24 19:28:50.734757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.575 qpair failed and we were unable to recover it. 00:28:04.575 [2024-07-24 19:28:50.735073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.575 [2024-07-24 19:28:50.735090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.575 qpair failed and we were unable to recover it. 
00:28:04.575 [2024-07-24 19:28:50.735323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.575 [2024-07-24 19:28:50.735341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.575 qpair failed and we were unable to recover it. 00:28:04.575 [2024-07-24 19:28:50.735579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.575 [2024-07-24 19:28:50.735596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.575 qpair failed and we were unable to recover it. 00:28:04.575 [2024-07-24 19:28:50.735829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.576 [2024-07-24 19:28:50.735847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.576 qpair failed and we were unable to recover it. 00:28:04.576 [2024-07-24 19:28:50.736101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.576 [2024-07-24 19:28:50.736118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.576 qpair failed and we were unable to recover it. 00:28:04.576 [2024-07-24 19:28:50.736340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.576 [2024-07-24 19:28:50.736357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.576 qpair failed and we were unable to recover it. 00:28:04.576 [2024-07-24 19:28:50.736666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.576 [2024-07-24 19:28:50.736683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.576 qpair failed and we were unable to recover it. 00:28:04.576 [2024-07-24 19:28:50.736998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.576 [2024-07-24 19:28:50.737016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.576 qpair failed and we were unable to recover it. 00:28:04.576 [2024-07-24 19:28:50.737257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.576 [2024-07-24 19:28:50.737274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.576 qpair failed and we were unable to recover it. 00:28:04.576 [2024-07-24 19:28:50.737507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.576 [2024-07-24 19:28:50.737524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.576 qpair failed and we were unable to recover it. 00:28:04.576 [2024-07-24 19:28:50.737766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.576 [2024-07-24 19:28:50.737784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.576 qpair failed and we were unable to recover it. 
00:28:04.576 [2024-07-24 19:28:50.738093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.576 [2024-07-24 19:28:50.738110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.576 qpair failed and we were unable to recover it. 00:28:04.576 [2024-07-24 19:28:50.738373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.576 [2024-07-24 19:28:50.738391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.576 qpair failed and we were unable to recover it. 00:28:04.576 [2024-07-24 19:28:50.738613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.576 [2024-07-24 19:28:50.738630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.576 qpair failed and we were unable to recover it. 00:28:04.576 [2024-07-24 19:28:50.738918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.576 [2024-07-24 19:28:50.738935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.576 qpair failed and we were unable to recover it. 00:28:04.576 [2024-07-24 19:28:50.739245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.576 [2024-07-24 19:28:50.739262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.576 qpair failed and we were unable to recover it. 00:28:04.576 [2024-07-24 19:28:50.739505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.576 [2024-07-24 19:28:50.739522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.576 qpair failed and we were unable to recover it. 00:28:04.576 [2024-07-24 19:28:50.739827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.576 [2024-07-24 19:28:50.739845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.576 qpair failed and we were unable to recover it. 00:28:04.576 [2024-07-24 19:28:50.740082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.576 [2024-07-24 19:28:50.740100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.576 qpair failed and we were unable to recover it. 00:28:04.576 [2024-07-24 19:28:50.740337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.576 [2024-07-24 19:28:50.740354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.576 qpair failed and we were unable to recover it. 00:28:04.576 [2024-07-24 19:28:50.740581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.576 [2024-07-24 19:28:50.740598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.576 qpair failed and we were unable to recover it. 
00:28:04.576 [2024-07-24 19:28:50.740859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.576 [2024-07-24 19:28:50.740876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.576 qpair failed and we were unable to recover it. 00:28:04.576 [2024-07-24 19:28:50.741161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.576 [2024-07-24 19:28:50.741178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.576 qpair failed and we were unable to recover it. 00:28:04.576 [2024-07-24 19:28:50.741341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.576 [2024-07-24 19:28:50.741359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.576 qpair failed and we were unable to recover it. 00:28:04.576 [2024-07-24 19:28:50.741597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.576 [2024-07-24 19:28:50.741615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.576 qpair failed and we were unable to recover it. 00:28:04.576 [2024-07-24 19:28:50.741847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.576 [2024-07-24 19:28:50.741865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.576 qpair failed and we were unable to recover it. 00:28:04.576 [2024-07-24 19:28:50.742160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.576 [2024-07-24 19:28:50.742177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.576 qpair failed and we were unable to recover it. 00:28:04.576 [2024-07-24 19:28:50.742399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.576 [2024-07-24 19:28:50.742417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.576 qpair failed and we were unable to recover it. 00:28:04.576 [2024-07-24 19:28:50.742641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.576 [2024-07-24 19:28:50.742664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.576 qpair failed and we were unable to recover it. 00:28:04.576 [2024-07-24 19:28:50.742971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.576 [2024-07-24 19:28:50.742989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.576 qpair failed and we were unable to recover it. 00:28:04.576 [2024-07-24 19:28:50.743269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.576 [2024-07-24 19:28:50.743286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.576 qpair failed and we were unable to recover it. 
00:28:04.576 [2024-07-24 19:28:50.743506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.576 [2024-07-24 19:28:50.743523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.576 qpair failed and we were unable to recover it. 00:28:04.576 [2024-07-24 19:28:50.743855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.576 [2024-07-24 19:28:50.743873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.576 qpair failed and we were unable to recover it. 00:28:04.576 [2024-07-24 19:28:50.744126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.576 [2024-07-24 19:28:50.744143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.576 qpair failed and we were unable to recover it. 00:28:04.576 [2024-07-24 19:28:50.744314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.576 [2024-07-24 19:28:50.744331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.576 qpair failed and we were unable to recover it. 00:28:04.576 [2024-07-24 19:28:50.744581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.576 [2024-07-24 19:28:50.744598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.576 qpair failed and we were unable to recover it. 00:28:04.576 [2024-07-24 19:28:50.744886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.576 [2024-07-24 19:28:50.744903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.576 qpair failed and we were unable to recover it. 00:28:04.576 [2024-07-24 19:28:50.745123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.576 [2024-07-24 19:28:50.745140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.576 qpair failed and we were unable to recover it. 00:28:04.576 [2024-07-24 19:28:50.745446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.576 [2024-07-24 19:28:50.745463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.577 qpair failed and we were unable to recover it. 00:28:04.577 [2024-07-24 19:28:50.745695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.577 [2024-07-24 19:28:50.745712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.577 qpair failed and we were unable to recover it. 00:28:04.577 [2024-07-24 19:28:50.745957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.577 [2024-07-24 19:28:50.745974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.577 qpair failed and we were unable to recover it. 
00:28:04.577 [2024-07-24 19:28:50.746284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.577 [2024-07-24 19:28:50.746302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.577 qpair failed and we were unable to recover it. 00:28:04.577 [2024-07-24 19:28:50.746537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.577 [2024-07-24 19:28:50.746555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.577 qpair failed and we were unable to recover it. 00:28:04.577 [2024-07-24 19:28:50.746843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.577 [2024-07-24 19:28:50.746861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.577 qpair failed and we were unable to recover it. 00:28:04.577 [2024-07-24 19:28:50.747181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.577 [2024-07-24 19:28:50.747198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.577 qpair failed and we were unable to recover it. 00:28:04.577 [2024-07-24 19:28:50.747421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.577 [2024-07-24 19:28:50.747439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.577 qpair failed and we were unable to recover it. 00:28:04.577 [2024-07-24 19:28:50.747697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.577 [2024-07-24 19:28:50.747719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.577 qpair failed and we were unable to recover it. 00:28:04.577 [2024-07-24 19:28:50.748048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.577 [2024-07-24 19:28:50.748066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.577 qpair failed and we were unable to recover it. 00:28:04.577 [2024-07-24 19:28:50.748303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.577 [2024-07-24 19:28:50.748321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.577 qpair failed and we were unable to recover it. 00:28:04.577 [2024-07-24 19:28:50.748542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.577 [2024-07-24 19:28:50.748559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.577 qpair failed and we were unable to recover it. 00:28:04.577 [2024-07-24 19:28:50.748790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.577 [2024-07-24 19:28:50.748807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.577 qpair failed and we were unable to recover it. 
00:28:04.577 [2024-07-24 19:28:50.749112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.577 [2024-07-24 19:28:50.749129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.577 qpair failed and we were unable to recover it. 00:28:04.577 [2024-07-24 19:28:50.749359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.577 [2024-07-24 19:28:50.749376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.577 qpair failed and we were unable to recover it. 00:28:04.577 [2024-07-24 19:28:50.749544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.577 [2024-07-24 19:28:50.749562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.577 qpair failed and we were unable to recover it. 00:28:04.577 [2024-07-24 19:28:50.749894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.577 [2024-07-24 19:28:50.749911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.577 qpair failed and we were unable to recover it. 00:28:04.577 [2024-07-24 19:28:50.750246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.577 [2024-07-24 19:28:50.750266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.577 qpair failed and we were unable to recover it. 00:28:04.577 [2024-07-24 19:28:50.750485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.577 [2024-07-24 19:28:50.750503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.577 qpair failed and we were unable to recover it. 00:28:04.577 [2024-07-24 19:28:50.750701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.577 [2024-07-24 19:28:50.750724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.577 qpair failed and we were unable to recover it. 00:28:04.577 [2024-07-24 19:28:50.750953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.577 [2024-07-24 19:28:50.750971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.577 qpair failed and we were unable to recover it. 00:28:04.577 [2024-07-24 19:28:50.751134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.577 [2024-07-24 19:28:50.751151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.577 qpair failed and we were unable to recover it. 00:28:04.577 [2024-07-24 19:28:50.751366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.577 [2024-07-24 19:28:50.751383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.577 qpair failed and we were unable to recover it. 
00:28:04.577 [2024-07-24 19:28:50.751689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.577 [2024-07-24 19:28:50.751706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.577 qpair failed and we were unable to recover it. 00:28:04.577 [2024-07-24 19:28:50.751952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.577 [2024-07-24 19:28:50.751969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.577 qpair failed and we were unable to recover it. 00:28:04.577 [2024-07-24 19:28:50.752277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.577 [2024-07-24 19:28:50.752294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.577 qpair failed and we were unable to recover it. 00:28:04.577 [2024-07-24 19:28:50.752533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.577 [2024-07-24 19:28:50.752550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.577 qpair failed and we were unable to recover it. 00:28:04.577 [2024-07-24 19:28:50.752768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.577 [2024-07-24 19:28:50.752786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.577 qpair failed and we were unable to recover it. 00:28:04.577 [2024-07-24 19:28:50.753011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.577 [2024-07-24 19:28:50.753028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.577 qpair failed and we were unable to recover it. 00:28:04.577 [2024-07-24 19:28:50.753336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.577 [2024-07-24 19:28:50.753355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.577 qpair failed and we were unable to recover it. 00:28:04.577 [2024-07-24 19:28:50.753589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.577 [2024-07-24 19:28:50.753606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.578 qpair failed and we were unable to recover it. 00:28:04.578 [2024-07-24 19:28:50.753926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.578 [2024-07-24 19:28:50.753944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.578 qpair failed and we were unable to recover it. 00:28:04.578 [2024-07-24 19:28:50.754164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.578 [2024-07-24 19:28:50.754181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.578 qpair failed and we were unable to recover it. 
00:28:04.578 [2024-07-24 19:28:50.754416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.578 [2024-07-24 19:28:50.754433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.578 qpair failed and we were unable to recover it. 00:28:04.578 [2024-07-24 19:28:50.754709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.578 [2024-07-24 19:28:50.754741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.578 qpair failed and we were unable to recover it. 00:28:04.578 [2024-07-24 19:28:50.754980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.578 [2024-07-24 19:28:50.754998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.578 qpair failed and we were unable to recover it. 00:28:04.578 [2024-07-24 19:28:50.755305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.578 [2024-07-24 19:28:50.755323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.578 qpair failed and we were unable to recover it. 00:28:04.578 [2024-07-24 19:28:50.755545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.578 [2024-07-24 19:28:50.755562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.578 qpair failed and we were unable to recover it. 00:28:04.578 [2024-07-24 19:28:50.755749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.578 [2024-07-24 19:28:50.755767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.578 qpair failed and we were unable to recover it. 00:28:04.578 [2024-07-24 19:28:50.756082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.578 [2024-07-24 19:28:50.756100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.578 qpair failed and we were unable to recover it. 00:28:04.578 [2024-07-24 19:28:50.756331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.578 [2024-07-24 19:28:50.756348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.578 qpair failed and we were unable to recover it. 00:28:04.578 [2024-07-24 19:28:50.756568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.578 [2024-07-24 19:28:50.756585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.578 qpair failed and we were unable to recover it. 00:28:04.578 [2024-07-24 19:28:50.756912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.578 [2024-07-24 19:28:50.756930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.578 qpair failed and we were unable to recover it. 
00:28:04.578 [2024-07-24 19:28:50.757189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.578 [2024-07-24 19:28:50.757206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.578 qpair failed and we were unable to recover it. 00:28:04.578 [2024-07-24 19:28:50.757492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.578 [2024-07-24 19:28:50.757511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.578 qpair failed and we were unable to recover it. 00:28:04.578 [2024-07-24 19:28:50.757770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.578 [2024-07-24 19:28:50.757788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.578 qpair failed and we were unable to recover it. 00:28:04.578 [2024-07-24 19:28:50.758017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.578 [2024-07-24 19:28:50.758034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.578 qpair failed and we were unable to recover it. 00:28:04.578 [2024-07-24 19:28:50.758204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.578 [2024-07-24 19:28:50.758222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.578 qpair failed and we were unable to recover it. 00:28:04.578 [2024-07-24 19:28:50.758464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.578 [2024-07-24 19:28:50.758481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.578 qpair failed and we were unable to recover it. 00:28:04.578 [2024-07-24 19:28:50.758700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.578 [2024-07-24 19:28:50.758721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.578 qpair failed and we were unable to recover it. 00:28:04.578 [2024-07-24 19:28:50.758951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.578 [2024-07-24 19:28:50.758969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.578 qpair failed and we were unable to recover it. 00:28:04.578 [2024-07-24 19:28:50.759219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.578 [2024-07-24 19:28:50.759236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.578 qpair failed and we were unable to recover it. 00:28:04.578 [2024-07-24 19:28:50.759525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.578 [2024-07-24 19:28:50.759542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.578 qpair failed and we were unable to recover it. 
00:28:04.578 [... connect()/qpair error block continues repeating through 19:28:50.761 ...]
00:28:04.578 [2024-07-24 19:28:50.761584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:04.578 [... connect()/qpair error block resumes immediately at 19:28:50.761621 ...]
00:28:04.578 [... connect()/qpair error block keeps repeating from 19:28:50.762 through 19:28:50.774; the log timestamp prefix advances from 00:28:04.578 to 00:28:04.858 over the run ...]
00:28:04.858 [2024-07-24 19:28:50.775197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.858 [2024-07-24 19:28:50.775215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.858 qpair failed and we were unable to recover it. 00:28:04.858 [2024-07-24 19:28:50.775410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.858 [2024-07-24 19:28:50.775427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.858 qpair failed and we were unable to recover it. 00:28:04.858 [2024-07-24 19:28:50.775681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.858 [2024-07-24 19:28:50.775698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.858 qpair failed and we were unable to recover it. 00:28:04.858 [2024-07-24 19:28:50.775978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.858 [2024-07-24 19:28:50.776022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.858 qpair failed and we were unable to recover it. 00:28:04.858 [2024-07-24 19:28:50.776394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.859 [2024-07-24 19:28:50.776432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.859 qpair failed and we were unable to recover it. 00:28:04.859 [2024-07-24 19:28:50.776665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.859 [2024-07-24 19:28:50.776684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.859 qpair failed and we were unable to recover it. 00:28:04.859 [2024-07-24 19:28:50.776979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.859 [2024-07-24 19:28:50.776998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.859 qpair failed and we were unable to recover it. 00:28:04.859 [2024-07-24 19:28:50.777284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.859 [2024-07-24 19:28:50.777301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.859 qpair failed and we were unable to recover it. 00:28:04.859 [2024-07-24 19:28:50.777549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.859 [2024-07-24 19:28:50.777567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.859 qpair failed and we were unable to recover it. 00:28:04.859 [2024-07-24 19:28:50.777879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.859 [2024-07-24 19:28:50.777897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.859 qpair failed and we were unable to recover it. 
00:28:04.859 [2024-07-24 19:28:50.778070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.859 [2024-07-24 19:28:50.778088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.859 qpair failed and we were unable to recover it. 00:28:04.859 [2024-07-24 19:28:50.778317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.859 [2024-07-24 19:28:50.778334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.859 qpair failed and we were unable to recover it. 00:28:04.859 [2024-07-24 19:28:50.778581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.859 [2024-07-24 19:28:50.778603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.859 qpair failed and we were unable to recover it. 00:28:04.859 [2024-07-24 19:28:50.778843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.859 [2024-07-24 19:28:50.778861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.859 qpair failed and we were unable to recover it. 00:28:04.859 [2024-07-24 19:28:50.779112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.859 [2024-07-24 19:28:50.779129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.859 qpair failed and we were unable to recover it. 00:28:04.859 [2024-07-24 19:28:50.779364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.859 [2024-07-24 19:28:50.779381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.859 qpair failed and we were unable to recover it. 00:28:04.859 [2024-07-24 19:28:50.779691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.859 [2024-07-24 19:28:50.779708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.859 qpair failed and we were unable to recover it. 00:28:04.859 [2024-07-24 19:28:50.780019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.859 [2024-07-24 19:28:50.780037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.859 qpair failed and we were unable to recover it. 00:28:04.859 [2024-07-24 19:28:50.780313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.859 [2024-07-24 19:28:50.780331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.859 qpair failed and we were unable to recover it. 00:28:04.859 [2024-07-24 19:28:50.780514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.859 [2024-07-24 19:28:50.780532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.859 qpair failed and we were unable to recover it. 
00:28:04.859 [2024-07-24 19:28:50.780774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.859 [2024-07-24 19:28:50.780793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.859 qpair failed and we were unable to recover it. 00:28:04.859 [2024-07-24 19:28:50.781079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.859 [2024-07-24 19:28:50.781097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.859 qpair failed and we were unable to recover it. 00:28:04.859 [2024-07-24 19:28:50.781354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.859 [2024-07-24 19:28:50.781372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.859 qpair failed and we were unable to recover it. 00:28:04.859 [2024-07-24 19:28:50.781529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.859 [2024-07-24 19:28:50.781547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.859 qpair failed and we were unable to recover it. 00:28:04.859 [2024-07-24 19:28:50.781775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.859 [2024-07-24 19:28:50.781793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.859 qpair failed and we were unable to recover it. 00:28:04.859 [2024-07-24 19:28:50.782113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.859 [2024-07-24 19:28:50.782130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.859 qpair failed and we were unable to recover it. 00:28:04.859 [2024-07-24 19:28:50.782370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.859 [2024-07-24 19:28:50.782388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.859 qpair failed and we were unable to recover it. 00:28:04.859 [2024-07-24 19:28:50.782676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.859 [2024-07-24 19:28:50.782693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.859 qpair failed and we were unable to recover it. 00:28:04.859 [2024-07-24 19:28:50.782929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.859 [2024-07-24 19:28:50.782946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.859 qpair failed and we were unable to recover it. 00:28:04.859 [2024-07-24 19:28:50.783232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.859 [2024-07-24 19:28:50.783250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.859 qpair failed and we were unable to recover it. 
00:28:04.859 [2024-07-24 19:28:50.783584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.859 [2024-07-24 19:28:50.783602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.859 qpair failed and we were unable to recover it. 00:28:04.859 [2024-07-24 19:28:50.783892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.859 [2024-07-24 19:28:50.783910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.859 qpair failed and we were unable to recover it. 00:28:04.859 [2024-07-24 19:28:50.784149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.859 [2024-07-24 19:28:50.784166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.859 qpair failed and we were unable to recover it. 00:28:04.859 [2024-07-24 19:28:50.784450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.859 [2024-07-24 19:28:50.784468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.859 qpair failed and we were unable to recover it. 00:28:04.859 [2024-07-24 19:28:50.784709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.859 [2024-07-24 19:28:50.784732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.859 qpair failed and we were unable to recover it. 00:28:04.859 [2024-07-24 19:28:50.785019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.859 [2024-07-24 19:28:50.785037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.859 qpair failed and we were unable to recover it. 00:28:04.859 [2024-07-24 19:28:50.785358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.859 [2024-07-24 19:28:50.785376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.859 qpair failed and we were unable to recover it. 00:28:04.859 [2024-07-24 19:28:50.785684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.859 [2024-07-24 19:28:50.785702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.859 qpair failed and we were unable to recover it. 00:28:04.859 [2024-07-24 19:28:50.785950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.860 [2024-07-24 19:28:50.785968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.860 qpair failed and we were unable to recover it. 00:28:04.860 [2024-07-24 19:28:50.786208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.860 [2024-07-24 19:28:50.786228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.860 qpair failed and we were unable to recover it. 
00:28:04.860 [2024-07-24 19:28:50.786475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.860 [2024-07-24 19:28:50.786492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.860 qpair failed and we were unable to recover it. 00:28:04.860 [2024-07-24 19:28:50.786738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.860 [2024-07-24 19:28:50.786756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.860 qpair failed and we were unable to recover it. 00:28:04.860 [2024-07-24 19:28:50.787072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.860 [2024-07-24 19:28:50.787089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.860 qpair failed and we were unable to recover it. 00:28:04.860 [2024-07-24 19:28:50.787368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.860 [2024-07-24 19:28:50.787385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.860 qpair failed and we were unable to recover it. 00:28:04.860 [2024-07-24 19:28:50.787619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.860 [2024-07-24 19:28:50.787636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.860 qpair failed and we were unable to recover it. 00:28:04.860 [2024-07-24 19:28:50.787921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.860 [2024-07-24 19:28:50.787939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.860 qpair failed and we were unable to recover it. 00:28:04.860 [2024-07-24 19:28:50.788235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.860 [2024-07-24 19:28:50.788252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.860 qpair failed and we were unable to recover it. 00:28:04.860 [2024-07-24 19:28:50.788486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.860 [2024-07-24 19:28:50.788503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.860 qpair failed and we were unable to recover it. 00:28:04.860 [2024-07-24 19:28:50.788688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.860 [2024-07-24 19:28:50.788706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.860 qpair failed and we were unable to recover it. 00:28:04.860 [2024-07-24 19:28:50.788969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.860 [2024-07-24 19:28:50.788986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.860 qpair failed and we were unable to recover it. 
00:28:04.860 [2024-07-24 19:28:50.789293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.860 [2024-07-24 19:28:50.789310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.860 qpair failed and we were unable to recover it. 00:28:04.860 [2024-07-24 19:28:50.789541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.860 [2024-07-24 19:28:50.789559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.860 qpair failed and we were unable to recover it. 00:28:04.860 [2024-07-24 19:28:50.789826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.860 [2024-07-24 19:28:50.789844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.860 qpair failed and we were unable to recover it. 00:28:04.860 [2024-07-24 19:28:50.790136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.860 [2024-07-24 19:28:50.790153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.860 qpair failed and we were unable to recover it. 00:28:04.860 [2024-07-24 19:28:50.790461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.860 [2024-07-24 19:28:50.790478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.860 qpair failed and we were unable to recover it. 00:28:04.860 [2024-07-24 19:28:50.790756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.860 [2024-07-24 19:28:50.790773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.860 qpair failed and we were unable to recover it. 00:28:04.860 [2024-07-24 19:28:50.790888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.860 [2024-07-24 19:28:50.790905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.860 qpair failed and we were unable to recover it. 00:28:04.860 [2024-07-24 19:28:50.791210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.860 [2024-07-24 19:28:50.791227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.860 qpair failed and we were unable to recover it. 00:28:04.860 [2024-07-24 19:28:50.791458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.860 [2024-07-24 19:28:50.791475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.860 qpair failed and we were unable to recover it. 00:28:04.860 [2024-07-24 19:28:50.791736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.860 [2024-07-24 19:28:50.791754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.860 qpair failed and we were unable to recover it. 
00:28:04.860 [2024-07-24 19:28:50.792102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.860 [2024-07-24 19:28:50.792119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.860 qpair failed and we were unable to recover it. 00:28:04.860 [2024-07-24 19:28:50.792355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.860 [2024-07-24 19:28:50.792373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.860 qpair failed and we were unable to recover it. 00:28:04.860 [2024-07-24 19:28:50.792537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.860 [2024-07-24 19:28:50.792557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.860 qpair failed and we were unable to recover it. 00:28:04.860 [2024-07-24 19:28:50.792795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.860 [2024-07-24 19:28:50.792811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.860 qpair failed and we were unable to recover it. 00:28:04.860 [2024-07-24 19:28:50.793025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.860 [2024-07-24 19:28:50.793043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.860 qpair failed and we were unable to recover it. 00:28:04.860 [2024-07-24 19:28:50.793264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.860 [2024-07-24 19:28:50.793282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.860 qpair failed and we were unable to recover it. 00:28:04.860 [2024-07-24 19:28:50.793494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.860 [2024-07-24 19:28:50.793514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.860 qpair failed and we were unable to recover it. 00:28:04.860 [2024-07-24 19:28:50.793799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.860 [2024-07-24 19:28:50.793817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.860 qpair failed and we were unable to recover it. 00:28:04.860 [2024-07-24 19:28:50.794030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.860 [2024-07-24 19:28:50.794047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.860 qpair failed and we were unable to recover it. 00:28:04.860 [2024-07-24 19:28:50.794279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.860 [2024-07-24 19:28:50.794296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.860 qpair failed and we were unable to recover it. 
00:28:04.860 [2024-07-24 19:28:50.794463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.860 [2024-07-24 19:28:50.794480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.860 qpair failed and we were unable to recover it. 00:28:04.860 [2024-07-24 19:28:50.794720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.860 [2024-07-24 19:28:50.794738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.860 qpair failed and we were unable to recover it. 00:28:04.860 [2024-07-24 19:28:50.794981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.860 [2024-07-24 19:28:50.794999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.860 qpair failed and we were unable to recover it. 00:28:04.861 [2024-07-24 19:28:50.795315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.861 [2024-07-24 19:28:50.795333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.861 qpair failed and we were unable to recover it. 00:28:04.861 [2024-07-24 19:28:50.795497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.861 [2024-07-24 19:28:50.795514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.861 qpair failed and we were unable to recover it. 00:28:04.861 [2024-07-24 19:28:50.795742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.861 [2024-07-24 19:28:50.795760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.861 qpair failed and we were unable to recover it. 00:28:04.861 [2024-07-24 19:28:50.795937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.861 [2024-07-24 19:28:50.795954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.861 qpair failed and we were unable to recover it. 00:28:04.861 [2024-07-24 19:28:50.796216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.861 [2024-07-24 19:28:50.796234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.861 qpair failed and we were unable to recover it. 00:28:04.861 [2024-07-24 19:28:50.796393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.861 [2024-07-24 19:28:50.796411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.861 qpair failed and we were unable to recover it. 00:28:04.861 [2024-07-24 19:28:50.796720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.861 [2024-07-24 19:28:50.796738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.861 qpair failed and we were unable to recover it. 
00:28:04.861 [2024-07-24 19:28:50.796998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.861 [2024-07-24 19:28:50.797015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.861 qpair failed and we were unable to recover it. 00:28:04.861 [2024-07-24 19:28:50.797178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.861 [2024-07-24 19:28:50.797196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.861 qpair failed and we were unable to recover it. 00:28:04.861 [2024-07-24 19:28:50.797377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.861 [2024-07-24 19:28:50.797395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.861 qpair failed and we were unable to recover it. 00:28:04.861 [2024-07-24 19:28:50.797631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.861 [2024-07-24 19:28:50.797648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.861 qpair failed and we were unable to recover it. 00:28:04.861 [2024-07-24 19:28:50.797870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.861 [2024-07-24 19:28:50.797888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.861 qpair failed and we were unable to recover it. 00:28:04.861 [2024-07-24 19:28:50.798119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.861 [2024-07-24 19:28:50.798137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.861 qpair failed and we were unable to recover it. 00:28:04.861 [2024-07-24 19:28:50.798470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.861 [2024-07-24 19:28:50.798488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.861 qpair failed and we were unable to recover it. 00:28:04.861 [2024-07-24 19:28:50.798653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.861 [2024-07-24 19:28:50.798671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.861 qpair failed and we were unable to recover it. 00:28:04.861 [2024-07-24 19:28:50.798832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.861 [2024-07-24 19:28:50.798851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.861 qpair failed and we were unable to recover it. 00:28:04.861 [2024-07-24 19:28:50.799089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.861 [2024-07-24 19:28:50.799106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.861 qpair failed and we were unable to recover it. 
00:28:04.861 [2024-07-24 19:28:50.799393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.861 [2024-07-24 19:28:50.799411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.861 qpair failed and we were unable to recover it. 00:28:04.861 [2024-07-24 19:28:50.799674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.861 [2024-07-24 19:28:50.799693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.861 qpair failed and we were unable to recover it. 00:28:04.861 [2024-07-24 19:28:50.799931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.861 [2024-07-24 19:28:50.799949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.861 qpair failed and we were unable to recover it. 00:28:04.861 [2024-07-24 19:28:50.800259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.861 [2024-07-24 19:28:50.800280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.861 qpair failed and we were unable to recover it. 00:28:04.861 [2024-07-24 19:28:50.800441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.861 [2024-07-24 19:28:50.800459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.861 qpair failed and we were unable to recover it. 00:28:04.861 [2024-07-24 19:28:50.800696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.861 [2024-07-24 19:28:50.800720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.861 qpair failed and we were unable to recover it. 00:28:04.861 [2024-07-24 19:28:50.800983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.861 [2024-07-24 19:28:50.801002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.861 qpair failed and we were unable to recover it. 00:28:04.861 [2024-07-24 19:28:50.801233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.861 [2024-07-24 19:28:50.801251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.861 qpair failed and we were unable to recover it. 00:28:04.861 [2024-07-24 19:28:50.801606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.861 [2024-07-24 19:28:50.801626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.861 qpair failed and we were unable to recover it. 00:28:04.861 [2024-07-24 19:28:50.801852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.861 [2024-07-24 19:28:50.801872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.861 qpair failed and we were unable to recover it. 
00:28:04.861 [2024-07-24 19:28:50.802181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.861 [2024-07-24 19:28:50.802201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.861 qpair failed and we were unable to recover it. 00:28:04.861 [2024-07-24 19:28:50.802364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.861 [2024-07-24 19:28:50.802384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.861 qpair failed and we were unable to recover it. 00:28:04.861 [2024-07-24 19:28:50.802571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.861 [2024-07-24 19:28:50.802590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.861 qpair failed and we were unable to recover it. 00:28:04.861 [2024-07-24 19:28:50.802742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.861 [2024-07-24 19:28:50.802760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.861 qpair failed and we were unable to recover it. 00:28:04.861 [2024-07-24 19:28:50.802994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.861 [2024-07-24 19:28:50.803012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.862 qpair failed and we were unable to recover it. 00:28:04.862 [2024-07-24 19:28:50.803253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.862 [2024-07-24 19:28:50.803273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.862 qpair failed and we were unable to recover it. 00:28:04.862 [2024-07-24 19:28:50.803504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.862 [2024-07-24 19:28:50.803524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.862 qpair failed and we were unable to recover it. 00:28:04.862 [2024-07-24 19:28:50.803816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.862 [2024-07-24 19:28:50.803835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.862 qpair failed and we were unable to recover it. 00:28:04.862 [2024-07-24 19:28:50.804013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.862 [2024-07-24 19:28:50.804031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.862 qpair failed and we were unable to recover it. 00:28:04.862 [2024-07-24 19:28:50.804203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.862 [2024-07-24 19:28:50.804220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.862 qpair failed and we were unable to recover it. 
00:28:04.862 [2024-07-24 19:28:50.804544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.862 [2024-07-24 19:28:50.804563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.862 qpair failed and we were unable to recover it. 00:28:04.862 [2024-07-24 19:28:50.804793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.862 [2024-07-24 19:28:50.804812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.862 qpair failed and we were unable to recover it. 00:28:04.862 [2024-07-24 19:28:50.805054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.862 [2024-07-24 19:28:50.805073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.862 qpair failed and we were unable to recover it. 00:28:04.862 [2024-07-24 19:28:50.805258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.862 [2024-07-24 19:28:50.805276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.862 qpair failed and we were unable to recover it. 00:28:04.862 [2024-07-24 19:28:50.805512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.862 [2024-07-24 19:28:50.805532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.862 qpair failed and we were unable to recover it. 00:28:04.862 [2024-07-24 19:28:50.805712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.862 [2024-07-24 19:28:50.805736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.862 qpair failed and we were unable to recover it. 00:28:04.862 [2024-07-24 19:28:50.805994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.862 [2024-07-24 19:28:50.806014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.862 qpair failed and we were unable to recover it. 00:28:04.862 [2024-07-24 19:28:50.806243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.862 [2024-07-24 19:28:50.806260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.862 qpair failed and we were unable to recover it. 00:28:04.862 [2024-07-24 19:28:50.806584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.862 [2024-07-24 19:28:50.806606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.862 qpair failed and we were unable to recover it. 00:28:04.862 [2024-07-24 19:28:50.806802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.862 [2024-07-24 19:28:50.806819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.862 qpair failed and we were unable to recover it. 
00:28:04.862 [2024-07-24 19:28:50.807128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.862 [2024-07-24 19:28:50.807153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.862 qpair failed and we were unable to recover it. 00:28:04.862 [2024-07-24 19:28:50.807374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.862 [2024-07-24 19:28:50.807394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.862 qpair failed and we were unable to recover it. 00:28:04.862 [2024-07-24 19:28:50.807678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.862 [2024-07-24 19:28:50.807696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.862 qpair failed and we were unable to recover it. 00:28:04.862 [2024-07-24 19:28:50.807976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.862 [2024-07-24 19:28:50.808007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.862 qpair failed and we were unable to recover it. 00:28:04.862 [2024-07-24 19:28:50.808246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.862 [2024-07-24 19:28:50.808264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.862 qpair failed and we were unable to recover it. 00:28:04.862 [2024-07-24 19:28:50.808569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.862 [2024-07-24 19:28:50.808587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.862 qpair failed and we were unable to recover it. 00:28:04.862 [2024-07-24 19:28:50.808895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.862 [2024-07-24 19:28:50.808913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.862 qpair failed and we were unable to recover it. 00:28:04.862 [2024-07-24 19:28:50.809245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.862 [2024-07-24 19:28:50.809263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.862 qpair failed and we were unable to recover it. 00:28:04.862 [2024-07-24 19:28:50.809495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.862 [2024-07-24 19:28:50.809512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.862 qpair failed and we were unable to recover it. 00:28:04.862 [2024-07-24 19:28:50.809747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.862 [2024-07-24 19:28:50.809765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.862 qpair failed and we were unable to recover it. 
00:28:04.862 [2024-07-24 19:28:50.810007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.862 [2024-07-24 19:28:50.810025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.862 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet for tqpair=0x7fd554000b90 repeats 89 more times between 19:28:50.810199 and 19:28:50.833009 ...]
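errno = 111 here is ECONNREFUSED on Linux: nothing is listening on 10.0.0.2 port 4420 (the IANA-assigned NVMe/TCP port) yet, so every qpair connect attempt is refused immediately. Below is a minimal standalone sketch that reproduces the same errno with plain POSIX sockets; it is illustrative only, not SPDK's posix_sock_create(), and the address and port are simply copied from the log above.

/* reproduce "connect() failed, errno = 111" against a port with no listener */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* with no listener on the target, errno is 111 (ECONNREFUSED) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}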
00:28:04.865 [2024-07-24 19:28:50.833253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.865 [2024-07-24 19:28:50.833273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.865 qpair failed and we were unable to recover it.
[... 3 more identical triplets for tqpair=0x7fd554000b90 between 19:28:50.833508 and 19:28:50.834033 ...]
00:28:04.865 [2024-07-24 19:28:50.834186] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:04.865 [2024-07-24 19:28:50.834218] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:04.865 [2024-07-24 19:28:50.834228] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:04.865 [2024-07-24 19:28:50.834237] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:04.865 [2024-07-24 19:28:50.834244] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:04.865 [2024-07-24 19:28:50.834367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:28:04.865 [2024-07-24 19:28:50.834480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:28:04.865 [2024-07-24 19:28:50.834588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:28:04.865 [2024-07-24 19:28:50.834590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
[... 3 more connect() failed (errno = 111) triplets for tqpair=0x7fd554000b90 between 19:28:50.834289 and 19:28:50.834737, interleaved in the raw output with the reactor start-up messages above ...]
00:28:04.865 [2024-07-24 19:28:50.835050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.865 [2024-07-24 19:28:50.835067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.865 qpair failed and we were unable to recover it.
[... the same triplet for tqpair=0x7fd554000b90 repeats 69 more times between 19:28:50.835330 and 19:28:50.853335 ...]
[... 7 more triplets for tqpair=0x7fd554000b90 between 19:28:50.853580 and 19:28:50.855039, after which the failures spread to two further qpairs: ...]
00:28:04.868 [2024-07-24 19:28:50.855251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.868 [2024-07-24 19:28:50.855301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:04.868 qpair failed and we were unable to recover it.
00:28:04.868 [2024-07-24 19:28:50.855482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.868 [2024-07-24 19:28:50.855513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.868 qpair failed and we were unable to recover it.
[... the same triplet for tqpair=0x7fd54c000b90 repeats 31 more times between 19:28:50.855815 and 19:28:50.863093 ...]
00:28:04.869 [2024-07-24 19:28:50.863392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.869 [2024-07-24 19:28:50.863406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.869 qpair failed and we were unable to recover it. 00:28:04.869 [2024-07-24 19:28:50.863613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.869 [2024-07-24 19:28:50.863626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.869 qpair failed and we were unable to recover it. 00:28:04.869 [2024-07-24 19:28:50.863845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.869 [2024-07-24 19:28:50.863858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.869 qpair failed and we were unable to recover it. 00:28:04.869 [2024-07-24 19:28:50.864072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.869 [2024-07-24 19:28:50.864086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.869 qpair failed and we were unable to recover it. 00:28:04.869 [2024-07-24 19:28:50.864352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.869 [2024-07-24 19:28:50.864365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.869 qpair failed and we were unable to recover it. 00:28:04.869 [2024-07-24 19:28:50.864526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.869 [2024-07-24 19:28:50.864539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.869 qpair failed and we were unable to recover it. 00:28:04.869 [2024-07-24 19:28:50.864811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.869 [2024-07-24 19:28:50.864825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.869 qpair failed and we were unable to recover it. 00:28:04.869 [2024-07-24 19:28:50.864936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.869 [2024-07-24 19:28:50.864955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.869 qpair failed and we were unable to recover it. 00:28:04.869 [2024-07-24 19:28:50.865289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.869 [2024-07-24 19:28:50.865307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.869 qpair failed and we were unable to recover it. 00:28:04.869 [2024-07-24 19:28:50.865601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.869 [2024-07-24 19:28:50.865619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.869 qpair failed and we were unable to recover it. 
00:28:04.869 [2024-07-24 19:28:50.865928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.869 [2024-07-24 19:28:50.865946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.869 qpair failed and we were unable to recover it. 00:28:04.869 [2024-07-24 19:28:50.866190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.869 [2024-07-24 19:28:50.866208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.869 qpair failed and we were unable to recover it. 00:28:04.869 [2024-07-24 19:28:50.866471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.869 [2024-07-24 19:28:50.866489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.869 qpair failed and we were unable to recover it. 00:28:04.869 [2024-07-24 19:28:50.866743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.869 [2024-07-24 19:28:50.866762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.869 qpair failed and we were unable to recover it. 00:28:04.869 [2024-07-24 19:28:50.867072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.869 [2024-07-24 19:28:50.867090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.869 qpair failed and we were unable to recover it. 00:28:04.869 [2024-07-24 19:28:50.867310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.869 [2024-07-24 19:28:50.867329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.869 qpair failed and we were unable to recover it. 00:28:04.869 [2024-07-24 19:28:50.867613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.870 [2024-07-24 19:28:50.867632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.870 qpair failed and we were unable to recover it. 00:28:04.870 [2024-07-24 19:28:50.867868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.870 [2024-07-24 19:28:50.867886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.870 qpair failed and we were unable to recover it. 00:28:04.870 [2024-07-24 19:28:50.868070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.870 [2024-07-24 19:28:50.868088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.870 qpair failed and we were unable to recover it. 00:28:04.870 [2024-07-24 19:28:50.868374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.870 [2024-07-24 19:28:50.868393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.870 qpair failed and we were unable to recover it. 
00:28:04.870 [2024-07-24 19:28:50.868632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.870 [2024-07-24 19:28:50.868655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.870 qpair failed and we were unable to recover it. 00:28:04.870 [2024-07-24 19:28:50.868971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.870 [2024-07-24 19:28:50.868990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.870 qpair failed and we were unable to recover it. 00:28:04.870 [2024-07-24 19:28:50.869172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.870 [2024-07-24 19:28:50.869190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.870 qpair failed and we were unable to recover it. 00:28:04.870 [2024-07-24 19:28:50.869379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.870 [2024-07-24 19:28:50.869397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.870 qpair failed and we were unable to recover it. 00:28:04.870 [2024-07-24 19:28:50.869556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.870 [2024-07-24 19:28:50.869575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.870 qpair failed and we were unable to recover it. 00:28:04.870 [2024-07-24 19:28:50.869832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.870 [2024-07-24 19:28:50.869851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.870 qpair failed and we were unable to recover it. 00:28:04.870 [2024-07-24 19:28:50.870103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.870 [2024-07-24 19:28:50.870121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.870 qpair failed and we were unable to recover it. 00:28:04.870 [2024-07-24 19:28:50.870344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.870 [2024-07-24 19:28:50.870363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.870 qpair failed and we were unable to recover it. 00:28:04.870 [2024-07-24 19:28:50.870584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.870 [2024-07-24 19:28:50.870602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.870 qpair failed and we were unable to recover it. 00:28:04.870 [2024-07-24 19:28:50.870833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.870 [2024-07-24 19:28:50.870850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.870 qpair failed and we were unable to recover it. 
00:28:04.870 [2024-07-24 19:28:50.871183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.870 [2024-07-24 19:28:50.871200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.870 qpair failed and we were unable to recover it. 00:28:04.870 [2024-07-24 19:28:50.871434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.870 [2024-07-24 19:28:50.871452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.870 qpair failed and we were unable to recover it. 00:28:04.870 [2024-07-24 19:28:50.871677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.870 [2024-07-24 19:28:50.871694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.870 qpair failed and we were unable to recover it. 00:28:04.870 [2024-07-24 19:28:50.871949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.870 [2024-07-24 19:28:50.871967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.870 qpair failed and we were unable to recover it. 00:28:04.870 [2024-07-24 19:28:50.872197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.870 [2024-07-24 19:28:50.872214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.870 qpair failed and we were unable to recover it. 00:28:04.870 [2024-07-24 19:28:50.872499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.870 [2024-07-24 19:28:50.872517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.870 qpair failed and we were unable to recover it. 00:28:04.870 [2024-07-24 19:28:50.872850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.870 [2024-07-24 19:28:50.872868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.870 qpair failed and we were unable to recover it. 00:28:04.870 [2024-07-24 19:28:50.873176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.870 [2024-07-24 19:28:50.873193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.870 qpair failed and we were unable to recover it. 00:28:04.870 [2024-07-24 19:28:50.873445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.870 [2024-07-24 19:28:50.873463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.870 qpair failed and we were unable to recover it. 00:28:04.870 [2024-07-24 19:28:50.873751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.870 [2024-07-24 19:28:50.873769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.870 qpair failed and we were unable to recover it. 
00:28:04.870 [2024-07-24 19:28:50.874046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.870 [2024-07-24 19:28:50.874064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.870 qpair failed and we were unable to recover it. 00:28:04.870 [2024-07-24 19:28:50.874323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.870 [2024-07-24 19:28:50.874341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.870 qpair failed and we were unable to recover it. 00:28:04.870 [2024-07-24 19:28:50.874589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.870 [2024-07-24 19:28:50.874607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.870 qpair failed and we were unable to recover it. 00:28:04.870 [2024-07-24 19:28:50.874860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.870 [2024-07-24 19:28:50.874878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.870 qpair failed and we were unable to recover it. 00:28:04.870 [2024-07-24 19:28:50.875114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.870 [2024-07-24 19:28:50.875132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.870 qpair failed and we were unable to recover it. 00:28:04.870 [2024-07-24 19:28:50.875460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.870 [2024-07-24 19:28:50.875480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.870 qpair failed and we were unable to recover it. 00:28:04.870 [2024-07-24 19:28:50.875722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.870 [2024-07-24 19:28:50.875742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.870 qpair failed and we were unable to recover it. 00:28:04.870 [2024-07-24 19:28:50.876075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.870 [2024-07-24 19:28:50.876096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.870 qpair failed and we were unable to recover it. 00:28:04.870 [2024-07-24 19:28:50.876262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.870 [2024-07-24 19:28:50.876275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.870 qpair failed and we were unable to recover it. 00:28:04.870 [2024-07-24 19:28:50.876477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.870 [2024-07-24 19:28:50.876492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.870 qpair failed and we were unable to recover it. 
00:28:04.870 [2024-07-24 19:28:50.876656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.870 [2024-07-24 19:28:50.876672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.870 qpair failed and we were unable to recover it. 00:28:04.871 [2024-07-24 19:28:50.876974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.871 [2024-07-24 19:28:50.876989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.871 qpair failed and we were unable to recover it. 00:28:04.871 [2024-07-24 19:28:50.877287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.871 [2024-07-24 19:28:50.877301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.871 qpair failed and we were unable to recover it. 00:28:04.871 [2024-07-24 19:28:50.877603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.871 [2024-07-24 19:28:50.877617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.871 qpair failed and we were unable to recover it. 00:28:04.871 [2024-07-24 19:28:50.877863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.871 [2024-07-24 19:28:50.877879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.871 qpair failed and we were unable to recover it. 00:28:04.871 [2024-07-24 19:28:50.878105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.871 [2024-07-24 19:28:50.878119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.871 qpair failed and we were unable to recover it. 00:28:04.871 [2024-07-24 19:28:50.878349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.871 [2024-07-24 19:28:50.878362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.871 qpair failed and we were unable to recover it. 00:28:04.871 [2024-07-24 19:28:50.878688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.871 [2024-07-24 19:28:50.878702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.871 qpair failed and we were unable to recover it. 00:28:04.871 [2024-07-24 19:28:50.878978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.871 [2024-07-24 19:28:50.879011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:04.871 qpair failed and we were unable to recover it. 00:28:04.871 [2024-07-24 19:28:50.879277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.871 [2024-07-24 19:28:50.879297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.871 qpair failed and we were unable to recover it. 
00:28:04.871 [2024-07-24 19:28:50.879529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.871 [2024-07-24 19:28:50.879546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.871 qpair failed and we were unable to recover it. 00:28:04.871 [2024-07-24 19:28:50.879836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.871 [2024-07-24 19:28:50.879854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.871 qpair failed and we were unable to recover it. 00:28:04.871 [2024-07-24 19:28:50.880089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.871 [2024-07-24 19:28:50.880106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.871 qpair failed and we were unable to recover it. 00:28:04.871 [2024-07-24 19:28:50.880336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.871 [2024-07-24 19:28:50.880353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.871 qpair failed and we were unable to recover it. 00:28:04.871 [2024-07-24 19:28:50.880609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.871 [2024-07-24 19:28:50.880627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.871 qpair failed and we were unable to recover it. 00:28:04.871 [2024-07-24 19:28:50.880869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.871 [2024-07-24 19:28:50.880886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.871 qpair failed and we were unable to recover it. 00:28:04.871 [2024-07-24 19:28:50.881065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.871 [2024-07-24 19:28:50.881083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.871 qpair failed and we were unable to recover it. 00:28:04.871 [2024-07-24 19:28:50.881258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.871 [2024-07-24 19:28:50.881276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.871 qpair failed and we were unable to recover it. 00:28:04.871 [2024-07-24 19:28:50.881504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.871 [2024-07-24 19:28:50.881521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.871 qpair failed and we were unable to recover it. 00:28:04.871 [2024-07-24 19:28:50.881747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.871 [2024-07-24 19:28:50.881765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.871 qpair failed and we were unable to recover it. 
00:28:04.871 [2024-07-24 19:28:50.882072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.871 [2024-07-24 19:28:50.882090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.871 qpair failed and we were unable to recover it. 00:28:04.871 [2024-07-24 19:28:50.882263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.871 [2024-07-24 19:28:50.882281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.871 qpair failed and we were unable to recover it. 00:28:04.871 [2024-07-24 19:28:50.882510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.871 [2024-07-24 19:28:50.882527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.871 qpair failed and we were unable to recover it. 00:28:04.871 [2024-07-24 19:28:50.882844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.871 [2024-07-24 19:28:50.882861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd554000b90 with addr=10.0.0.2, port=4420 00:28:04.871 qpair failed and we were unable to recover it. 00:28:04.871 [2024-07-24 19:28:50.883174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.871 [2024-07-24 19:28:50.883189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.871 qpair failed and we were unable to recover it. 00:28:04.871 [2024-07-24 19:28:50.883361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.871 [2024-07-24 19:28:50.883375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.871 qpair failed and we were unable to recover it. 00:28:04.871 [2024-07-24 19:28:50.883594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.871 [2024-07-24 19:28:50.883607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.871 qpair failed and we were unable to recover it. 00:28:04.871 [2024-07-24 19:28:50.883826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.871 [2024-07-24 19:28:50.883840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.871 qpair failed and we were unable to recover it. 00:28:04.871 [2024-07-24 19:28:50.884139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.871 [2024-07-24 19:28:50.884153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.871 qpair failed and we were unable to recover it. 00:28:04.871 [2024-07-24 19:28:50.884400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.871 [2024-07-24 19:28:50.884414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.871 qpair failed and we were unable to recover it. 
00:28:04.871 [2024-07-24 19:28:50.884703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.871 [2024-07-24 19:28:50.884720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.871 qpair failed and we were unable to recover it. 00:28:04.871 [2024-07-24 19:28:50.884894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.871 [2024-07-24 19:28:50.884908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.871 qpair failed and we were unable to recover it. 00:28:04.872 [2024-07-24 19:28:50.885182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.872 [2024-07-24 19:28:50.885195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.872 qpair failed and we were unable to recover it. 00:28:04.872 [2024-07-24 19:28:50.885494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.872 [2024-07-24 19:28:50.885507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.872 qpair failed and we were unable to recover it. 00:28:04.872 [2024-07-24 19:28:50.885726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.872 [2024-07-24 19:28:50.885740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.872 qpair failed and we were unable to recover it. 00:28:04.872 [2024-07-24 19:28:50.885964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.872 [2024-07-24 19:28:50.885978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.872 qpair failed and we were unable to recover it. 00:28:04.872 [2024-07-24 19:28:50.886118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.872 [2024-07-24 19:28:50.886131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.872 qpair failed and we were unable to recover it. 00:28:04.872 [2024-07-24 19:28:50.886380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.872 [2024-07-24 19:28:50.886397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.872 qpair failed and we were unable to recover it. 00:28:04.872 [2024-07-24 19:28:50.886630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.872 [2024-07-24 19:28:50.886643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.872 qpair failed and we were unable to recover it. 00:28:04.872 [2024-07-24 19:28:50.886946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.872 [2024-07-24 19:28:50.886960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.872 qpair failed and we were unable to recover it. 
00:28:04.872 [2024-07-24 19:28:50.887182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.872 [2024-07-24 19:28:50.887196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.872 qpair failed and we were unable to recover it. 00:28:04.872 [2024-07-24 19:28:50.887423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.872 [2024-07-24 19:28:50.887436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.872 qpair failed and we were unable to recover it. 00:28:04.872 [2024-07-24 19:28:50.887681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.872 [2024-07-24 19:28:50.887695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.872 qpair failed and we were unable to recover it. 00:28:04.872 [2024-07-24 19:28:50.887972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.872 [2024-07-24 19:28:50.887986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.872 qpair failed and we were unable to recover it. 00:28:04.872 [2024-07-24 19:28:50.888290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.872 [2024-07-24 19:28:50.888305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.872 qpair failed and we were unable to recover it. 00:28:04.872 [2024-07-24 19:28:50.888469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.872 [2024-07-24 19:28:50.888482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.872 qpair failed and we were unable to recover it. 00:28:04.872 [2024-07-24 19:28:50.888641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.872 [2024-07-24 19:28:50.888654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.872 qpair failed and we were unable to recover it. 00:28:04.872 [2024-07-24 19:28:50.888957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.872 [2024-07-24 19:28:50.888970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.872 qpair failed and we were unable to recover it. 00:28:04.872 [2024-07-24 19:28:50.889160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.872 [2024-07-24 19:28:50.889174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.872 qpair failed and we were unable to recover it. 00:28:04.872 [2024-07-24 19:28:50.889410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.872 [2024-07-24 19:28:50.889423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.872 qpair failed and we were unable to recover it. 
00:28:04.872 [2024-07-24 19:28:50.889643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.872 [2024-07-24 19:28:50.889656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.872 qpair failed and we were unable to recover it. 00:28:04.872 [2024-07-24 19:28:50.889934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.872 [2024-07-24 19:28:50.889947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.872 qpair failed and we were unable to recover it. 00:28:04.872 [2024-07-24 19:28:50.890047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.872 [2024-07-24 19:28:50.890059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.872 qpair failed and we were unable to recover it. 00:28:04.872 [2024-07-24 19:28:50.890292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.872 [2024-07-24 19:28:50.890306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.872 qpair failed and we were unable to recover it. 00:28:04.872 [2024-07-24 19:28:50.890524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.872 [2024-07-24 19:28:50.890537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.872 qpair failed and we were unable to recover it. 00:28:04.872 [2024-07-24 19:28:50.890765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.872 [2024-07-24 19:28:50.890778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.872 qpair failed and we were unable to recover it. 00:28:04.872 [2024-07-24 19:28:50.891073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.872 [2024-07-24 19:28:50.891086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.872 qpair failed and we were unable to recover it. 00:28:04.872 [2024-07-24 19:28:50.891378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.872 [2024-07-24 19:28:50.891392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.872 qpair failed and we were unable to recover it. 00:28:04.872 [2024-07-24 19:28:50.891664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.872 [2024-07-24 19:28:50.891678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.872 qpair failed and we were unable to recover it. 00:28:04.872 [2024-07-24 19:28:50.891899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.872 [2024-07-24 19:28:50.891912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.872 qpair failed and we were unable to recover it. 
00:28:04.872 [2024-07-24 19:28:50.892128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.872 [2024-07-24 19:28:50.892141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.872 qpair failed and we were unable to recover it. 00:28:04.872 [2024-07-24 19:28:50.892245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.872 [2024-07-24 19:28:50.892257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.872 qpair failed and we were unable to recover it. 00:28:04.872 [2024-07-24 19:28:50.892489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.872 [2024-07-24 19:28:50.892501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.872 qpair failed and we were unable to recover it. 00:28:04.872 [2024-07-24 19:28:50.892670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.872 [2024-07-24 19:28:50.892683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.872 qpair failed and we were unable to recover it. 00:28:04.872 [2024-07-24 19:28:50.892961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.872 [2024-07-24 19:28:50.892974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.872 qpair failed and we were unable to recover it. 00:28:04.872 [2024-07-24 19:28:50.893227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.872 [2024-07-24 19:28:50.893240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.872 qpair failed and we were unable to recover it. 00:28:04.872 [2024-07-24 19:28:50.893395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.872 [2024-07-24 19:28:50.893408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.873 qpair failed and we were unable to recover it. 00:28:04.873 [2024-07-24 19:28:50.893684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.873 [2024-07-24 19:28:50.893698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.873 qpair failed and we were unable to recover it. 00:28:04.873 [2024-07-24 19:28:50.893925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.873 [2024-07-24 19:28:50.893939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.873 qpair failed and we were unable to recover it. 00:28:04.873 [2024-07-24 19:28:50.894094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.873 [2024-07-24 19:28:50.894107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.873 qpair failed and we were unable to recover it. 
00:28:04.873 [2024-07-24 19:28:50.894332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.873 [2024-07-24 19:28:50.894346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.873 qpair failed and we were unable to recover it. 00:28:04.873 [2024-07-24 19:28:50.894571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.873 [2024-07-24 19:28:50.894584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.873 qpair failed and we were unable to recover it. 00:28:04.873 [2024-07-24 19:28:50.894800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.873 [2024-07-24 19:28:50.894814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.873 qpair failed and we were unable to recover it. 00:28:04.873 [2024-07-24 19:28:50.894980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.873 [2024-07-24 19:28:50.894993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.873 qpair failed and we were unable to recover it. 00:28:04.873 [2024-07-24 19:28:50.895268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.873 [2024-07-24 19:28:50.895281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.873 qpair failed and we were unable to recover it. 00:28:04.873 [2024-07-24 19:28:50.895557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.873 [2024-07-24 19:28:50.895571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.873 qpair failed and we were unable to recover it. 00:28:04.873 [2024-07-24 19:28:50.895722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.873 [2024-07-24 19:28:50.895735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.873 qpair failed and we were unable to recover it. 00:28:04.873 [2024-07-24 19:28:50.896015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.873 [2024-07-24 19:28:50.896030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.873 qpair failed and we were unable to recover it. 00:28:04.873 [2024-07-24 19:28:50.896197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.873 [2024-07-24 19:28:50.896211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.873 qpair failed and we were unable to recover it. 00:28:04.873 [2024-07-24 19:28:50.896435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.873 [2024-07-24 19:28:50.896449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.873 qpair failed and we were unable to recover it. 
00:28:04.873 [2024-07-24 19:28:50.896656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.873 [2024-07-24 19:28:50.896669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.873 qpair failed and we were unable to recover it. 00:28:04.873 [2024-07-24 19:28:50.896893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.873 [2024-07-24 19:28:50.896907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.873 qpair failed and we were unable to recover it. 00:28:04.873 [2024-07-24 19:28:50.897122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.873 [2024-07-24 19:28:50.897135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.873 qpair failed and we were unable to recover it. 00:28:04.873 [2024-07-24 19:28:50.897340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.873 [2024-07-24 19:28:50.897353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.873 qpair failed and we were unable to recover it. 00:28:04.873 [2024-07-24 19:28:50.897526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.873 [2024-07-24 19:28:50.897539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.873 qpair failed and we were unable to recover it. 00:28:04.873 [2024-07-24 19:28:50.897748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.873 [2024-07-24 19:28:50.897762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.873 qpair failed and we were unable to recover it. 00:28:04.873 [2024-07-24 19:28:50.897984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.873 [2024-07-24 19:28:50.897998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.873 qpair failed and we were unable to recover it. 00:28:04.873 [2024-07-24 19:28:50.898295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.873 [2024-07-24 19:28:50.898307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.873 qpair failed and we were unable to recover it. 00:28:04.873 [2024-07-24 19:28:50.898471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.873 [2024-07-24 19:28:50.898484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.873 qpair failed and we were unable to recover it. 00:28:04.873 [2024-07-24 19:28:50.898760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.873 [2024-07-24 19:28:50.898774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.873 qpair failed and we were unable to recover it. 
00:28:04.880 [2024-07-24 19:28:50.948181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.880 [2024-07-24 19:28:50.948194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.880 qpair failed and we were unable to recover it. 00:28:04.880 [2024-07-24 19:28:50.948405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.880 [2024-07-24 19:28:50.948418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.880 qpair failed and we were unable to recover it. 00:28:04.880 [2024-07-24 19:28:50.948667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.880 [2024-07-24 19:28:50.948680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.880 qpair failed and we were unable to recover it. 00:28:04.880 [2024-07-24 19:28:50.948888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.880 [2024-07-24 19:28:50.948901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.880 qpair failed and we were unable to recover it. 00:28:04.880 [2024-07-24 19:28:50.949127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.880 [2024-07-24 19:28:50.949140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.880 qpair failed and we were unable to recover it. 00:28:04.880 [2024-07-24 19:28:50.949354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.880 [2024-07-24 19:28:50.949367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.880 qpair failed and we were unable to recover it. 00:28:04.880 [2024-07-24 19:28:50.949643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.880 [2024-07-24 19:28:50.949656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.880 qpair failed and we were unable to recover it. 00:28:04.880 [2024-07-24 19:28:50.949883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.880 [2024-07-24 19:28:50.949897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.880 qpair failed and we were unable to recover it. 00:28:04.880 [2024-07-24 19:28:50.950118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.880 [2024-07-24 19:28:50.950131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.880 qpair failed and we were unable to recover it. 00:28:04.880 [2024-07-24 19:28:50.950375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.880 [2024-07-24 19:28:50.950389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.880 qpair failed and we were unable to recover it. 
00:28:04.880 [2024-07-24 19:28:50.950664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.880 [2024-07-24 19:28:50.950677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.880 qpair failed and we were unable to recover it. 00:28:04.880 [2024-07-24 19:28:50.950885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.880 [2024-07-24 19:28:50.950898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.880 qpair failed and we were unable to recover it. 00:28:04.880 [2024-07-24 19:28:50.951172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.880 [2024-07-24 19:28:50.951185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.880 qpair failed and we were unable to recover it. 00:28:04.880 [2024-07-24 19:28:50.951462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.880 [2024-07-24 19:28:50.951475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.880 qpair failed and we were unable to recover it. 00:28:04.880 [2024-07-24 19:28:50.951731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.880 [2024-07-24 19:28:50.951745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.880 qpair failed and we were unable to recover it. 00:28:04.880 [2024-07-24 19:28:50.951971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.880 [2024-07-24 19:28:50.951985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.880 qpair failed and we were unable to recover it. 00:28:04.880 [2024-07-24 19:28:50.952154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.880 [2024-07-24 19:28:50.952167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.880 qpair failed and we were unable to recover it. 00:28:04.880 [2024-07-24 19:28:50.952377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.880 [2024-07-24 19:28:50.952390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.880 qpair failed and we were unable to recover it. 00:28:04.880 [2024-07-24 19:28:50.952541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.880 [2024-07-24 19:28:50.952554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.880 qpair failed and we were unable to recover it. 00:28:04.880 [2024-07-24 19:28:50.952785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.880 [2024-07-24 19:28:50.952799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.880 qpair failed and we were unable to recover it. 
00:28:04.880 [2024-07-24 19:28:50.953049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.880 [2024-07-24 19:28:50.953062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.880 qpair failed and we were unable to recover it. 00:28:04.880 [2024-07-24 19:28:50.953216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.880 [2024-07-24 19:28:50.953229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.880 qpair failed and we were unable to recover it. 00:28:04.880 [2024-07-24 19:28:50.953503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.880 [2024-07-24 19:28:50.953516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.880 qpair failed and we were unable to recover it. 00:28:04.880 [2024-07-24 19:28:50.953732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.880 [2024-07-24 19:28:50.953745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.880 qpair failed and we were unable to recover it. 00:28:04.880 [2024-07-24 19:28:50.954055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.880 [2024-07-24 19:28:50.954068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.880 qpair failed and we were unable to recover it. 00:28:04.880 [2024-07-24 19:28:50.954281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.880 [2024-07-24 19:28:50.954294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.880 qpair failed and we were unable to recover it. 00:28:04.880 [2024-07-24 19:28:50.954456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.880 [2024-07-24 19:28:50.954469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.880 qpair failed and we were unable to recover it. 00:28:04.880 [2024-07-24 19:28:50.954767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.881 [2024-07-24 19:28:50.954783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.881 qpair failed and we were unable to recover it. 00:28:04.881 [2024-07-24 19:28:50.955099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.881 [2024-07-24 19:28:50.955112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.881 qpair failed and we were unable to recover it. 00:28:04.881 [2024-07-24 19:28:50.955335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.881 [2024-07-24 19:28:50.955348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.881 qpair failed and we were unable to recover it. 
00:28:04.881 [2024-07-24 19:28:50.955555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.881 [2024-07-24 19:28:50.955568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.881 qpair failed and we were unable to recover it. 00:28:04.881 [2024-07-24 19:28:50.955785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.881 [2024-07-24 19:28:50.955799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.881 qpair failed and we were unable to recover it. 00:28:04.881 [2024-07-24 19:28:50.955943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.881 [2024-07-24 19:28:50.955955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.881 qpair failed and we were unable to recover it. 00:28:04.881 [2024-07-24 19:28:50.956176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.881 [2024-07-24 19:28:50.956189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.881 qpair failed and we were unable to recover it. 00:28:04.881 [2024-07-24 19:28:50.956395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.881 [2024-07-24 19:28:50.956408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.881 qpair failed and we were unable to recover it. 00:28:04.881 [2024-07-24 19:28:50.956629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.881 [2024-07-24 19:28:50.956642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.881 qpair failed and we were unable to recover it. 00:28:04.881 [2024-07-24 19:28:50.956853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.881 [2024-07-24 19:28:50.956867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.881 qpair failed and we were unable to recover it. 00:28:04.881 [2024-07-24 19:28:50.957091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.881 [2024-07-24 19:28:50.957104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.881 qpair failed and we were unable to recover it. 00:28:04.881 [2024-07-24 19:28:50.957265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.881 [2024-07-24 19:28:50.957278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.881 qpair failed and we were unable to recover it. 00:28:04.881 [2024-07-24 19:28:50.957488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.881 [2024-07-24 19:28:50.957500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.881 qpair failed and we were unable to recover it. 
00:28:04.881 [2024-07-24 19:28:50.957733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.881 [2024-07-24 19:28:50.957746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.881 qpair failed and we were unable to recover it. 00:28:04.881 [2024-07-24 19:28:50.957957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.881 [2024-07-24 19:28:50.957970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.881 qpair failed and we were unable to recover it. 00:28:04.881 [2024-07-24 19:28:50.958183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.881 [2024-07-24 19:28:50.958196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.881 qpair failed and we were unable to recover it. 00:28:04.881 [2024-07-24 19:28:50.958416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.881 [2024-07-24 19:28:50.958430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.881 qpair failed and we were unable to recover it. 00:28:04.881 [2024-07-24 19:28:50.958706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.881 [2024-07-24 19:28:50.958724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.881 qpair failed and we were unable to recover it. 00:28:04.881 [2024-07-24 19:28:50.958944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.881 [2024-07-24 19:28:50.958957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.881 qpair failed and we were unable to recover it. 00:28:04.881 [2024-07-24 19:28:50.959250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.881 [2024-07-24 19:28:50.959263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.881 qpair failed and we were unable to recover it. 00:28:04.881 [2024-07-24 19:28:50.959566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.881 [2024-07-24 19:28:50.959579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.881 qpair failed and we were unable to recover it. 00:28:04.881 [2024-07-24 19:28:50.959740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.881 [2024-07-24 19:28:50.959753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.881 qpair failed and we were unable to recover it. 00:28:04.881 [2024-07-24 19:28:50.960058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.881 [2024-07-24 19:28:50.960071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.881 qpair failed and we were unable to recover it. 
00:28:04.881 [2024-07-24 19:28:50.960301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.881 [2024-07-24 19:28:50.960314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.881 qpair failed and we were unable to recover it. 00:28:04.881 [2024-07-24 19:28:50.960600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.881 [2024-07-24 19:28:50.960613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.881 qpair failed and we were unable to recover it. 00:28:04.881 [2024-07-24 19:28:50.960908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.881 [2024-07-24 19:28:50.960922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.881 qpair failed and we were unable to recover it. 00:28:04.881 [2024-07-24 19:28:50.961205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.881 [2024-07-24 19:28:50.961218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.881 qpair failed and we were unable to recover it. 00:28:04.881 [2024-07-24 19:28:50.961367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.881 [2024-07-24 19:28:50.961380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.881 qpair failed and we were unable to recover it. 00:28:04.881 [2024-07-24 19:28:50.961657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.881 [2024-07-24 19:28:50.961670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.881 qpair failed and we were unable to recover it. 00:28:04.881 [2024-07-24 19:28:50.961934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.881 [2024-07-24 19:28:50.961948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.881 qpair failed and we were unable to recover it. 00:28:04.881 [2024-07-24 19:28:50.962152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.881 [2024-07-24 19:28:50.962165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.881 qpair failed and we were unable to recover it. 00:28:04.881 [2024-07-24 19:28:50.962389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.881 [2024-07-24 19:28:50.962402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.881 qpair failed and we were unable to recover it. 00:28:04.881 [2024-07-24 19:28:50.962652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.881 [2024-07-24 19:28:50.962665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.881 qpair failed and we were unable to recover it. 
00:28:04.881 [2024-07-24 19:28:50.962833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.881 [2024-07-24 19:28:50.962847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.881 qpair failed and we were unable to recover it. 00:28:04.881 [2024-07-24 19:28:50.963120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.881 [2024-07-24 19:28:50.963133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.881 qpair failed and we were unable to recover it. 00:28:04.881 [2024-07-24 19:28:50.963377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.882 [2024-07-24 19:28:50.963390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.882 qpair failed and we were unable to recover it. 00:28:04.882 [2024-07-24 19:28:50.963550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.882 [2024-07-24 19:28:50.963563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.882 qpair failed and we were unable to recover it. 00:28:04.882 [2024-07-24 19:28:50.963781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.882 [2024-07-24 19:28:50.963794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.882 qpair failed and we were unable to recover it. 00:28:04.882 [2024-07-24 19:28:50.964005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.882 [2024-07-24 19:28:50.964019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.882 qpair failed and we were unable to recover it. 00:28:04.882 [2024-07-24 19:28:50.964251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.882 [2024-07-24 19:28:50.964264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.882 qpair failed and we were unable to recover it. 00:28:04.882 [2024-07-24 19:28:50.964547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.882 [2024-07-24 19:28:50.964562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.882 qpair failed and we were unable to recover it. 00:28:04.882 [2024-07-24 19:28:50.964708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.882 [2024-07-24 19:28:50.964725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.882 qpair failed and we were unable to recover it. 00:28:04.882 [2024-07-24 19:28:50.964881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.882 [2024-07-24 19:28:50.964894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.882 qpair failed and we were unable to recover it. 
00:28:04.882 [2024-07-24 19:28:50.965171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.882 [2024-07-24 19:28:50.965185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.882 qpair failed and we were unable to recover it. 00:28:04.882 [2024-07-24 19:28:50.965403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.882 [2024-07-24 19:28:50.965415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.882 qpair failed and we were unable to recover it. 00:28:04.882 [2024-07-24 19:28:50.965657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.882 [2024-07-24 19:28:50.965670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.882 qpair failed and we were unable to recover it. 00:28:04.882 [2024-07-24 19:28:50.965839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.882 [2024-07-24 19:28:50.965852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.882 qpair failed and we were unable to recover it. 00:28:04.882 [2024-07-24 19:28:50.965994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.882 [2024-07-24 19:28:50.966007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.882 qpair failed and we were unable to recover it. 00:28:04.882 [2024-07-24 19:28:50.966176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.882 [2024-07-24 19:28:50.966190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.882 qpair failed and we were unable to recover it. 00:28:04.882 [2024-07-24 19:28:50.966510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.882 [2024-07-24 19:28:50.966523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.882 qpair failed and we were unable to recover it. 00:28:04.882 [2024-07-24 19:28:50.966748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.882 [2024-07-24 19:28:50.966761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.882 qpair failed and we were unable to recover it. 00:28:04.882 [2024-07-24 19:28:50.967052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.882 [2024-07-24 19:28:50.967065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.882 qpair failed and we were unable to recover it. 00:28:04.882 [2024-07-24 19:28:50.967366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.882 [2024-07-24 19:28:50.967380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.882 qpair failed and we were unable to recover it. 
00:28:04.882 [2024-07-24 19:28:50.967658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.882 [2024-07-24 19:28:50.967671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.882 qpair failed and we were unable to recover it. 00:28:04.882 [2024-07-24 19:28:50.967965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.882 [2024-07-24 19:28:50.967978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.882 qpair failed and we were unable to recover it. 00:28:04.882 [2024-07-24 19:28:50.968218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.882 [2024-07-24 19:28:50.968232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.882 qpair failed and we were unable to recover it. 00:28:04.882 [2024-07-24 19:28:50.968397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.882 [2024-07-24 19:28:50.968410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.882 qpair failed and we were unable to recover it. 00:28:04.882 [2024-07-24 19:28:50.968712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.882 [2024-07-24 19:28:50.968730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.882 qpair failed and we were unable to recover it. 00:28:04.882 [2024-07-24 19:28:50.968972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.882 [2024-07-24 19:28:50.968985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.882 qpair failed and we were unable to recover it. 00:28:04.882 [2024-07-24 19:28:50.969207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.882 [2024-07-24 19:28:50.969220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.882 qpair failed and we were unable to recover it. 00:28:04.882 [2024-07-24 19:28:50.969448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.882 [2024-07-24 19:28:50.969461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.882 qpair failed and we were unable to recover it. 00:28:04.882 [2024-07-24 19:28:50.969757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.882 [2024-07-24 19:28:50.969770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.882 qpair failed and we were unable to recover it. 00:28:04.882 [2024-07-24 19:28:50.969921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.882 [2024-07-24 19:28:50.969934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.882 qpair failed and we were unable to recover it. 
00:28:04.882 [2024-07-24 19:28:50.970108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.882 [2024-07-24 19:28:50.970121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.882 qpair failed and we were unable to recover it. 00:28:04.882 [2024-07-24 19:28:50.970349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.882 [2024-07-24 19:28:50.970363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.882 qpair failed and we were unable to recover it. 00:28:04.882 [2024-07-24 19:28:50.970534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.882 [2024-07-24 19:28:50.970547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.882 qpair failed and we were unable to recover it. 00:28:04.882 [2024-07-24 19:28:50.970687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.882 [2024-07-24 19:28:50.970700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.882 qpair failed and we were unable to recover it. 00:28:04.882 [2024-07-24 19:28:50.970931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.882 [2024-07-24 19:28:50.970944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.882 qpair failed and we were unable to recover it. 00:28:04.882 [2024-07-24 19:28:50.971175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.882 [2024-07-24 19:28:50.971189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.882 qpair failed and we were unable to recover it. 00:28:04.882 [2024-07-24 19:28:50.971417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.883 [2024-07-24 19:28:50.971430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.883 qpair failed and we were unable to recover it. 00:28:04.883 [2024-07-24 19:28:50.971585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.883 [2024-07-24 19:28:50.971598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.883 qpair failed and we were unable to recover it. 00:28:04.883 [2024-07-24 19:28:50.971768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.883 [2024-07-24 19:28:50.971781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.883 qpair failed and we were unable to recover it. 00:28:04.883 [2024-07-24 19:28:50.972022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.883 [2024-07-24 19:28:50.972036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.883 qpair failed and we were unable to recover it. 
00:28:04.883 [2024-07-24 19:28:50.972207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.883 [2024-07-24 19:28:50.972220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.883 qpair failed and we were unable to recover it. 00:28:04.883 [2024-07-24 19:28:50.972428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.883 [2024-07-24 19:28:50.972441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.883 qpair failed and we were unable to recover it. 00:28:04.883 [2024-07-24 19:28:50.972605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.883 [2024-07-24 19:28:50.972618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.883 qpair failed and we were unable to recover it. 00:28:04.883 [2024-07-24 19:28:50.972845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.883 [2024-07-24 19:28:50.972859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.883 qpair failed and we were unable to recover it. 00:28:04.883 [2024-07-24 19:28:50.973135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.883 [2024-07-24 19:28:50.973148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.883 qpair failed and we were unable to recover it. 00:28:04.883 [2024-07-24 19:28:50.973392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.883 [2024-07-24 19:28:50.973405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.883 qpair failed and we were unable to recover it. 00:28:04.883 [2024-07-24 19:28:50.973706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.883 [2024-07-24 19:28:50.973732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.883 qpair failed and we were unable to recover it. 00:28:04.883 [2024-07-24 19:28:50.973910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.883 [2024-07-24 19:28:50.973925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.883 qpair failed and we were unable to recover it. 00:28:04.883 [2024-07-24 19:28:50.974077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.883 [2024-07-24 19:28:50.974090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.883 qpair failed and we were unable to recover it. 00:28:04.883 [2024-07-24 19:28:50.974254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.883 [2024-07-24 19:28:50.974268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.883 qpair failed and we were unable to recover it. 
00:28:04.883 [2024-07-24 19:28:50.974427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.883 [2024-07-24 19:28:50.974440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.883 qpair failed and we were unable to recover it. 00:28:04.883 [2024-07-24 19:28:50.974734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.883 [2024-07-24 19:28:50.974748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.883 qpair failed and we were unable to recover it. 00:28:04.883 [2024-07-24 19:28:50.974909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.883 [2024-07-24 19:28:50.974922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.883 qpair failed and we were unable to recover it. 00:28:04.883 [2024-07-24 19:28:50.975152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.883 [2024-07-24 19:28:50.975166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.883 qpair failed and we were unable to recover it. 00:28:04.883 [2024-07-24 19:28:50.975321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.883 [2024-07-24 19:28:50.975335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.883 qpair failed and we were unable to recover it. 00:28:04.883 [2024-07-24 19:28:50.975555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.883 [2024-07-24 19:28:50.975569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.883 qpair failed and we were unable to recover it. 00:28:04.883 [2024-07-24 19:28:50.975781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.883 [2024-07-24 19:28:50.975794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.883 qpair failed and we were unable to recover it. 00:28:04.883 [2024-07-24 19:28:50.976026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.883 [2024-07-24 19:28:50.976039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.883 qpair failed and we were unable to recover it. 00:28:04.883 [2024-07-24 19:28:50.976276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.883 [2024-07-24 19:28:50.976289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.883 qpair failed and we were unable to recover it. 00:28:04.883 [2024-07-24 19:28:50.976530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.883 [2024-07-24 19:28:50.976544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.883 qpair failed and we were unable to recover it. 
00:28:04.883 [2024-07-24 19:28:50.976841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.883 [2024-07-24 19:28:50.976854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.883 qpair failed and we were unable to recover it. 00:28:04.883 [2024-07-24 19:28:50.977079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.883 [2024-07-24 19:28:50.977093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.883 qpair failed and we were unable to recover it. 00:28:04.883 [2024-07-24 19:28:50.977250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.883 [2024-07-24 19:28:50.977263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.883 qpair failed and we were unable to recover it. 00:28:04.883 [2024-07-24 19:28:50.977563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.883 [2024-07-24 19:28:50.977576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.883 qpair failed and we were unable to recover it. 00:28:04.883 [2024-07-24 19:28:50.977782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.883 [2024-07-24 19:28:50.977796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.883 qpair failed and we were unable to recover it. 00:28:04.883 [2024-07-24 19:28:50.978079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.883 [2024-07-24 19:28:50.978092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.883 qpair failed and we were unable to recover it. 00:28:04.883 [2024-07-24 19:28:50.978247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.883 [2024-07-24 19:28:50.978261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.883 qpair failed and we were unable to recover it. 00:28:04.883 [2024-07-24 19:28:50.978479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.883 [2024-07-24 19:28:50.978492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.883 qpair failed and we were unable to recover it. 00:28:04.883 [2024-07-24 19:28:50.978703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.883 [2024-07-24 19:28:50.978722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.883 qpair failed and we were unable to recover it. 00:28:04.884 [2024-07-24 19:28:50.978953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.884 [2024-07-24 19:28:50.978967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.884 qpair failed and we were unable to recover it. 
00:28:04.884 [2024-07-24 19:28:50.979175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.884 [2024-07-24 19:28:50.979189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.884 qpair failed and we were unable to recover it. 00:28:04.884 [2024-07-24 19:28:50.979359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.884 [2024-07-24 19:28:50.979372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.884 qpair failed and we were unable to recover it. 00:28:04.884 [2024-07-24 19:28:50.979591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.884 [2024-07-24 19:28:50.979604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.884 qpair failed and we were unable to recover it. 00:28:04.884 [2024-07-24 19:28:50.979819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.884 [2024-07-24 19:28:50.979832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.884 qpair failed and we were unable to recover it. 00:28:04.884 [2024-07-24 19:28:50.980008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.884 [2024-07-24 19:28:50.980021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.884 qpair failed and we were unable to recover it. 00:28:04.884 [2024-07-24 19:28:50.980264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.884 [2024-07-24 19:28:50.980277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.884 qpair failed and we were unable to recover it. 00:28:04.884 [2024-07-24 19:28:50.980476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.884 [2024-07-24 19:28:50.980490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.884 qpair failed and we were unable to recover it. 00:28:04.884 [2024-07-24 19:28:50.980725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.884 [2024-07-24 19:28:50.980739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.884 qpair failed and we were unable to recover it. 00:28:04.884 [2024-07-24 19:28:50.980837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.884 [2024-07-24 19:28:50.980849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.884 qpair failed and we were unable to recover it. 00:28:04.884 [2024-07-24 19:28:50.981086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.884 [2024-07-24 19:28:50.981099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.884 qpair failed and we were unable to recover it. 
00:28:04.890 [... the same three error lines repeat back-to-back through 2024-07-24 19:28:51.031440 — every reconnect attempt to addr=10.0.0.2, port=4420 for tqpair=0x7fd54c000b90 fails with errno = 111 and the qpair cannot be recovered ...]
00:28:04.890 [2024-07-24 19:28:51.031645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.890 [2024-07-24 19:28:51.031658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.890 qpair failed and we were unable to recover it. 00:28:04.890 [2024-07-24 19:28:51.032005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.890 [2024-07-24 19:28:51.032018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.890 qpair failed and we were unable to recover it. 00:28:04.890 [2024-07-24 19:28:51.032172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.890 [2024-07-24 19:28:51.032186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.890 qpair failed and we were unable to recover it. 00:28:04.890 [2024-07-24 19:28:51.032482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.890 [2024-07-24 19:28:51.032495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.890 qpair failed and we were unable to recover it. 00:28:04.890 [2024-07-24 19:28:51.032725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.890 [2024-07-24 19:28:51.032738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.890 qpair failed and we were unable to recover it. 00:28:04.890 [2024-07-24 19:28:51.032901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.890 [2024-07-24 19:28:51.032914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.890 qpair failed and we were unable to recover it. 00:28:04.890 [2024-07-24 19:28:51.033136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.890 [2024-07-24 19:28:51.033149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.890 qpair failed and we were unable to recover it. 00:28:04.890 [2024-07-24 19:28:51.033386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.890 [2024-07-24 19:28:51.033399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.890 qpair failed and we were unable to recover it. 00:28:04.890 [2024-07-24 19:28:51.033626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.890 [2024-07-24 19:28:51.033639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.890 qpair failed and we were unable to recover it. 00:28:04.890 [2024-07-24 19:28:51.033872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.890 [2024-07-24 19:28:51.033886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.890 qpair failed and we were unable to recover it. 
00:28:04.890 [2024-07-24 19:28:51.034114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.890 [2024-07-24 19:28:51.034128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.890 qpair failed and we were unable to recover it. 00:28:04.890 [2024-07-24 19:28:51.034266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.890 [2024-07-24 19:28:51.034279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.890 qpair failed and we were unable to recover it. 00:28:04.890 [2024-07-24 19:28:51.034600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.890 [2024-07-24 19:28:51.034613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.890 qpair failed and we were unable to recover it. 00:28:04.890 [2024-07-24 19:28:51.034842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.891 [2024-07-24 19:28:51.034856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.891 qpair failed and we were unable to recover it. 00:28:04.891 [2024-07-24 19:28:51.035085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.891 [2024-07-24 19:28:51.035098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.891 qpair failed and we were unable to recover it. 00:28:04.891 [2024-07-24 19:28:51.035327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.891 [2024-07-24 19:28:51.035340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.891 qpair failed and we were unable to recover it. 00:28:04.891 [2024-07-24 19:28:51.035623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.891 [2024-07-24 19:28:51.035637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.891 qpair failed and we were unable to recover it. 00:28:04.891 [2024-07-24 19:28:51.035857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.891 [2024-07-24 19:28:51.035871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.891 qpair failed and we were unable to recover it. 00:28:04.891 [2024-07-24 19:28:51.036099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.891 [2024-07-24 19:28:51.036112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.891 qpair failed and we were unable to recover it. 00:28:04.891 [2024-07-24 19:28:51.036338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.891 [2024-07-24 19:28:51.036351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.891 qpair failed and we were unable to recover it. 
00:28:04.891 [2024-07-24 19:28:51.036661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.891 [2024-07-24 19:28:51.036674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.891 qpair failed and we were unable to recover it. 00:28:04.891 [2024-07-24 19:28:51.036975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.891 [2024-07-24 19:28:51.036989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.891 qpair failed and we were unable to recover it. 00:28:04.891 [2024-07-24 19:28:51.037150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.891 [2024-07-24 19:28:51.037163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.891 qpair failed and we were unable to recover it. 00:28:04.891 [2024-07-24 19:28:51.037440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.891 [2024-07-24 19:28:51.037454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.891 qpair failed and we were unable to recover it. 00:28:04.891 [2024-07-24 19:28:51.037750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.891 [2024-07-24 19:28:51.037764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.891 qpair failed and we were unable to recover it. 00:28:04.891 [2024-07-24 19:28:51.038064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.891 [2024-07-24 19:28:51.038077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.891 qpair failed and we were unable to recover it. 00:28:04.891 [2024-07-24 19:28:51.038297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.891 [2024-07-24 19:28:51.038310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.891 qpair failed and we were unable to recover it. 00:28:04.891 [2024-07-24 19:28:51.038528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.891 [2024-07-24 19:28:51.038542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.891 qpair failed and we were unable to recover it. 00:28:04.891 [2024-07-24 19:28:51.038720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.891 [2024-07-24 19:28:51.038733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.891 qpair failed and we were unable to recover it. 00:28:04.891 [2024-07-24 19:28:51.038951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.891 [2024-07-24 19:28:51.038964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.891 qpair failed and we were unable to recover it. 
00:28:04.891 [2024-07-24 19:28:51.039238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.891 [2024-07-24 19:28:51.039251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.891 qpair failed and we were unable to recover it. 00:28:04.891 [2024-07-24 19:28:51.039459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.891 [2024-07-24 19:28:51.039473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.891 qpair failed and we were unable to recover it. 00:28:04.891 [2024-07-24 19:28:51.039718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.891 [2024-07-24 19:28:51.039732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.891 qpair failed and we were unable to recover it. 00:28:04.891 [2024-07-24 19:28:51.039900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.891 [2024-07-24 19:28:51.039913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.891 qpair failed and we were unable to recover it. 00:28:04.891 [2024-07-24 19:28:51.040148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.891 [2024-07-24 19:28:51.040163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.891 qpair failed and we were unable to recover it. 00:28:04.891 [2024-07-24 19:28:51.040386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.891 [2024-07-24 19:28:51.040399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.891 qpair failed and we were unable to recover it. 00:28:04.891 [2024-07-24 19:28:51.040623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.891 [2024-07-24 19:28:51.040636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.891 qpair failed and we were unable to recover it. 00:28:04.891 [2024-07-24 19:28:51.040778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.891 [2024-07-24 19:28:51.040792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.891 qpair failed and we were unable to recover it. 00:28:04.891 [2024-07-24 19:28:51.041004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.891 [2024-07-24 19:28:51.041017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.891 qpair failed and we were unable to recover it. 00:28:04.891 [2024-07-24 19:28:51.041187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.891 [2024-07-24 19:28:51.041201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.891 qpair failed and we were unable to recover it. 
00:28:04.891 [2024-07-24 19:28:51.041496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.891 [2024-07-24 19:28:51.041509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.891 qpair failed and we were unable to recover it. 00:28:04.891 [2024-07-24 19:28:51.041811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.891 [2024-07-24 19:28:51.041825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.891 qpair failed and we were unable to recover it. 00:28:04.891 [2024-07-24 19:28:51.042035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.891 [2024-07-24 19:28:51.042048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.891 qpair failed and we were unable to recover it. 00:28:04.891 [2024-07-24 19:28:51.042209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.891 [2024-07-24 19:28:51.042222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.891 qpair failed and we were unable to recover it. 00:28:04.891 [2024-07-24 19:28:51.042434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.891 [2024-07-24 19:28:51.042448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.891 qpair failed and we were unable to recover it. 00:28:04.891 [2024-07-24 19:28:51.042672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.891 [2024-07-24 19:28:51.042686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.891 qpair failed and we were unable to recover it. 00:28:04.891 [2024-07-24 19:28:51.043024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.891 [2024-07-24 19:28:51.043037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.891 qpair failed and we were unable to recover it. 00:28:04.891 [2024-07-24 19:28:51.043333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.891 [2024-07-24 19:28:51.043346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.891 qpair failed and we were unable to recover it. 00:28:04.891 [2024-07-24 19:28:51.043592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.892 [2024-07-24 19:28:51.043606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.892 qpair failed and we were unable to recover it. 00:28:04.892 [2024-07-24 19:28:51.043767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.892 [2024-07-24 19:28:51.043780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.892 qpair failed and we were unable to recover it. 
00:28:04.892 [2024-07-24 19:28:51.044082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.892 [2024-07-24 19:28:51.044095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.892 qpair failed and we were unable to recover it. 00:28:04.892 [2024-07-24 19:28:51.044315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.892 [2024-07-24 19:28:51.044328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.892 qpair failed and we were unable to recover it. 00:28:04.892 [2024-07-24 19:28:51.044548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.892 [2024-07-24 19:28:51.044561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.892 qpair failed and we were unable to recover it. 00:28:04.892 [2024-07-24 19:28:51.044785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.892 [2024-07-24 19:28:51.044799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.892 qpair failed and we were unable to recover it. 00:28:04.892 [2024-07-24 19:28:51.045074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.892 [2024-07-24 19:28:51.045087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.892 qpair failed and we were unable to recover it. 00:28:04.892 [2024-07-24 19:28:51.045328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.892 [2024-07-24 19:28:51.045341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.892 qpair failed and we were unable to recover it. 00:28:04.892 [2024-07-24 19:28:51.045622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.892 [2024-07-24 19:28:51.045635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.892 qpair failed and we were unable to recover it. 00:28:04.892 [2024-07-24 19:28:51.045948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.892 [2024-07-24 19:28:51.045962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.892 qpair failed and we were unable to recover it. 00:28:04.892 [2024-07-24 19:28:51.046169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.892 [2024-07-24 19:28:51.046182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.892 qpair failed and we were unable to recover it. 00:28:04.892 [2024-07-24 19:28:51.046413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.892 [2024-07-24 19:28:51.046426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.892 qpair failed and we were unable to recover it. 
00:28:04.892 [2024-07-24 19:28:51.046732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.892 [2024-07-24 19:28:51.046746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.892 qpair failed and we were unable to recover it. 00:28:04.892 [2024-07-24 19:28:51.047035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.892 [2024-07-24 19:28:51.047049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.892 qpair failed and we were unable to recover it. 00:28:04.892 [2024-07-24 19:28:51.047292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.892 [2024-07-24 19:28:51.047305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.892 qpair failed and we were unable to recover it. 00:28:04.892 [2024-07-24 19:28:51.047515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.892 [2024-07-24 19:28:51.047528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.892 qpair failed and we were unable to recover it. 00:28:04.892 [2024-07-24 19:28:51.047754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.892 [2024-07-24 19:28:51.047768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.892 qpair failed and we were unable to recover it. 00:28:04.892 [2024-07-24 19:28:51.047986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.892 [2024-07-24 19:28:51.047999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.892 qpair failed and we were unable to recover it. 00:28:04.892 [2024-07-24 19:28:51.048141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.892 [2024-07-24 19:28:51.048154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.892 qpair failed and we were unable to recover it. 00:28:04.892 [2024-07-24 19:28:51.048427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.892 [2024-07-24 19:28:51.048440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.892 qpair failed and we were unable to recover it. 00:28:04.892 [2024-07-24 19:28:51.048659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.892 [2024-07-24 19:28:51.048672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.892 qpair failed and we were unable to recover it. 00:28:04.892 [2024-07-24 19:28:51.048882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.892 [2024-07-24 19:28:51.048895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.892 qpair failed and we were unable to recover it. 
00:28:04.892 [2024-07-24 19:28:51.049124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.892 [2024-07-24 19:28:51.049138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.892 qpair failed and we were unable to recover it. 00:28:04.892 [2024-07-24 19:28:51.049343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.892 [2024-07-24 19:28:51.049357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.892 qpair failed and we were unable to recover it. 00:28:04.892 [2024-07-24 19:28:51.049563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.892 [2024-07-24 19:28:51.049576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.892 qpair failed and we were unable to recover it. 00:28:04.892 [2024-07-24 19:28:51.049851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.892 [2024-07-24 19:28:51.049865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.892 qpair failed and we were unable to recover it. 00:28:04.892 [2024-07-24 19:28:51.050036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.892 [2024-07-24 19:28:51.050052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.892 qpair failed and we were unable to recover it. 00:28:04.892 [2024-07-24 19:28:51.050154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.892 [2024-07-24 19:28:51.050166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.892 qpair failed and we were unable to recover it. 00:28:04.892 [2024-07-24 19:28:51.050441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.892 [2024-07-24 19:28:51.050454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.892 qpair failed and we were unable to recover it. 00:28:04.892 [2024-07-24 19:28:51.050683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.892 [2024-07-24 19:28:51.050696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.892 qpair failed and we were unable to recover it. 00:28:04.892 [2024-07-24 19:28:51.050957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.892 [2024-07-24 19:28:51.050970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.892 qpair failed and we were unable to recover it. 00:28:04.892 [2024-07-24 19:28:51.051122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.892 [2024-07-24 19:28:51.051135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.892 qpair failed and we were unable to recover it. 
00:28:04.892 [2024-07-24 19:28:51.051382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.892 [2024-07-24 19:28:51.051395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.892 qpair failed and we were unable to recover it. 00:28:04.892 [2024-07-24 19:28:51.051682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.892 [2024-07-24 19:28:51.051695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.892 qpair failed and we were unable to recover it. 00:28:04.892 [2024-07-24 19:28:51.051984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.892 [2024-07-24 19:28:51.051997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.892 qpair failed and we were unable to recover it. 00:28:04.893 [2024-07-24 19:28:51.052299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.893 [2024-07-24 19:28:51.052312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.893 qpair failed and we were unable to recover it. 00:28:04.893 [2024-07-24 19:28:51.052475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.893 [2024-07-24 19:28:51.052488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.893 qpair failed and we were unable to recover it. 00:28:04.893 [2024-07-24 19:28:51.052696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.893 [2024-07-24 19:28:51.052709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.893 qpair failed and we were unable to recover it. 00:28:04.893 [2024-07-24 19:28:51.052932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.893 [2024-07-24 19:28:51.052946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.893 qpair failed and we were unable to recover it. 00:28:04.893 [2024-07-24 19:28:51.053163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.893 [2024-07-24 19:28:51.053176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.893 qpair failed and we were unable to recover it. 00:28:04.893 [2024-07-24 19:28:51.053477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.893 [2024-07-24 19:28:51.053490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.893 qpair failed and we were unable to recover it. 00:28:04.893 [2024-07-24 19:28:51.053655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.893 [2024-07-24 19:28:51.053668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.893 qpair failed and we were unable to recover it. 
00:28:04.893 [2024-07-24 19:28:51.053968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.893 [2024-07-24 19:28:51.053982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.893 qpair failed and we were unable to recover it. 00:28:04.893 [2024-07-24 19:28:51.054194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.893 [2024-07-24 19:28:51.054207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.893 qpair failed and we were unable to recover it. 00:28:04.893 [2024-07-24 19:28:51.054502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.893 [2024-07-24 19:28:51.054515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.893 qpair failed and we were unable to recover it. 00:28:04.893 [2024-07-24 19:28:51.054791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.893 [2024-07-24 19:28:51.054804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.893 qpair failed and we were unable to recover it. 00:28:04.893 [2024-07-24 19:28:51.055013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.893 [2024-07-24 19:28:51.055026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.893 qpair failed and we were unable to recover it. 00:28:04.893 [2024-07-24 19:28:51.055259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.893 [2024-07-24 19:28:51.055272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.893 qpair failed and we were unable to recover it. 00:28:04.893 [2024-07-24 19:28:51.055498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.893 [2024-07-24 19:28:51.055511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.893 qpair failed and we were unable to recover it. 00:28:04.893 [2024-07-24 19:28:51.055742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.893 [2024-07-24 19:28:51.055755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.893 qpair failed and we were unable to recover it. 00:28:04.893 [2024-07-24 19:28:51.056031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.893 [2024-07-24 19:28:51.056045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.893 qpair failed and we were unable to recover it. 00:28:04.893 [2024-07-24 19:28:51.056251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.893 [2024-07-24 19:28:51.056265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.893 qpair failed and we were unable to recover it. 
00:28:04.893 [2024-07-24 19:28:51.056511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.893 [2024-07-24 19:28:51.056524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.893 qpair failed and we were unable to recover it. 00:28:04.893 [2024-07-24 19:28:51.056733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.893 [2024-07-24 19:28:51.056746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.893 qpair failed and we were unable to recover it. 00:28:04.893 [2024-07-24 19:28:51.056921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.893 [2024-07-24 19:28:51.056934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.893 qpair failed and we were unable to recover it. 00:28:04.893 [2024-07-24 19:28:51.057105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.893 [2024-07-24 19:28:51.057118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.893 qpair failed and we were unable to recover it. 00:28:04.893 [2024-07-24 19:28:51.057333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.893 [2024-07-24 19:28:51.057347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.893 qpair failed and we were unable to recover it. 00:28:04.893 [2024-07-24 19:28:51.057507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.893 [2024-07-24 19:28:51.057520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.893 qpair failed and we were unable to recover it. 00:28:04.893 [2024-07-24 19:28:51.057733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.893 [2024-07-24 19:28:51.057747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.893 qpair failed and we were unable to recover it. 00:28:04.893 [2024-07-24 19:28:51.058073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.893 [2024-07-24 19:28:51.058086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.893 qpair failed and we were unable to recover it. 00:28:04.893 [2024-07-24 19:28:51.058383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.893 [2024-07-24 19:28:51.058396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.893 qpair failed and we were unable to recover it. 00:28:04.893 [2024-07-24 19:28:51.058567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.893 [2024-07-24 19:28:51.058580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.893 qpair failed and we were unable to recover it. 
00:28:04.893 [2024-07-24 19:28:51.058901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.893 [2024-07-24 19:28:51.058915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.893 qpair failed and we were unable to recover it. 00:28:04.893 [2024-07-24 19:28:51.059138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.893 [2024-07-24 19:28:51.059151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.893 qpair failed and we were unable to recover it. 00:28:04.893 [2024-07-24 19:28:51.059473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.893 [2024-07-24 19:28:51.059487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.893 qpair failed and we were unable to recover it. 00:28:04.893 [2024-07-24 19:28:51.059696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.893 [2024-07-24 19:28:51.059709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.893 qpair failed and we were unable to recover it. 00:28:04.893 [2024-07-24 19:28:51.060011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.893 [2024-07-24 19:28:51.060026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.893 qpair failed and we were unable to recover it. 00:28:04.893 [2024-07-24 19:28:51.060215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.893 [2024-07-24 19:28:51.060228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.893 qpair failed and we were unable to recover it. 00:28:04.893 [2024-07-24 19:28:51.060442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.893 [2024-07-24 19:28:51.060456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.894 qpair failed and we were unable to recover it. 00:28:04.894 [2024-07-24 19:28:51.060751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.894 [2024-07-24 19:28:51.060764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.894 qpair failed and we were unable to recover it. 00:28:04.894 [2024-07-24 19:28:51.061032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.894 [2024-07-24 19:28:51.061045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.894 qpair failed and we were unable to recover it. 00:28:04.894 [2024-07-24 19:28:51.061306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.894 [2024-07-24 19:28:51.061319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.894 qpair failed and we were unable to recover it. 
00:28:04.894 [2024-07-24 19:28:51.061543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.894 [2024-07-24 19:28:51.061557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.894 qpair failed and we were unable to recover it. 00:28:04.894 [2024-07-24 19:28:51.061729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.894 [2024-07-24 19:28:51.061742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.894 qpair failed and we were unable to recover it. 00:28:04.894 [2024-07-24 19:28:51.062038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.894 [2024-07-24 19:28:51.062051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.894 qpair failed and we were unable to recover it. 00:28:04.894 [2024-07-24 19:28:51.062331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.894 [2024-07-24 19:28:51.062344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.894 qpair failed and we were unable to recover it. 00:28:04.894 [2024-07-24 19:28:51.062644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.894 [2024-07-24 19:28:51.062657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.894 qpair failed and we were unable to recover it. 00:28:04.894 [2024-07-24 19:28:51.062938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.894 [2024-07-24 19:28:51.062952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.894 qpair failed and we were unable to recover it. 00:28:04.894 [2024-07-24 19:28:51.063161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.894 [2024-07-24 19:28:51.063174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.894 qpair failed and we were unable to recover it. 00:28:04.894 [2024-07-24 19:28:51.063342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.894 [2024-07-24 19:28:51.063356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.894 qpair failed and we were unable to recover it. 00:28:04.894 [2024-07-24 19:28:51.063586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.894 [2024-07-24 19:28:51.063599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.894 qpair failed and we were unable to recover it. 00:28:04.894 [2024-07-24 19:28:51.063849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.894 [2024-07-24 19:28:51.063863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.894 qpair failed and we were unable to recover it. 
00:28:04.894 [2024-07-24 19:28:51.064072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.894 [2024-07-24 19:28:51.064085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.894 qpair failed and we were unable to recover it. 00:28:04.894 [2024-07-24 19:28:51.064298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.894 [2024-07-24 19:28:51.064311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.894 qpair failed and we were unable to recover it. 00:28:04.894 [2024-07-24 19:28:51.064572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.894 [2024-07-24 19:28:51.064585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.894 qpair failed and we were unable to recover it. 00:28:04.894 [2024-07-24 19:28:51.064862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.894 [2024-07-24 19:28:51.064875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.894 qpair failed and we were unable to recover it. 00:28:04.894 [2024-07-24 19:28:51.065021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.894 [2024-07-24 19:28:51.065034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.894 qpair failed and we were unable to recover it. 00:28:04.894 [2024-07-24 19:28:51.065320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.894 [2024-07-24 19:28:51.065334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.894 qpair failed and we were unable to recover it. 00:28:04.894 [2024-07-24 19:28:51.065505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.894 [2024-07-24 19:28:51.065518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.894 qpair failed and we were unable to recover it. 00:28:04.894 [2024-07-24 19:28:51.065737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.894 [2024-07-24 19:28:51.065751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.894 qpair failed and we were unable to recover it. 00:28:04.894 [2024-07-24 19:28:51.065998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.894 [2024-07-24 19:28:51.066012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.894 qpair failed and we were unable to recover it. 00:28:04.894 [2024-07-24 19:28:51.066258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.894 [2024-07-24 19:28:51.066272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.894 qpair failed and we were unable to recover it. 
00:28:04.894 [2024-07-24 19:28:51.066499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.894 [2024-07-24 19:28:51.066512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.894 qpair failed and we were unable to recover it. 00:28:04.894 [2024-07-24 19:28:51.066768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.894 [2024-07-24 19:28:51.066782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.894 qpair failed and we were unable to recover it. 00:28:04.894 [2024-07-24 19:28:51.067005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.894 [2024-07-24 19:28:51.067018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.894 qpair failed and we were unable to recover it. 00:28:04.894 [2024-07-24 19:28:51.067248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.894 [2024-07-24 19:28:51.067262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.894 qpair failed and we were unable to recover it. 00:28:04.894 [2024-07-24 19:28:51.067497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.894 [2024-07-24 19:28:51.067510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.894 qpair failed and we were unable to recover it. 00:28:04.894 [2024-07-24 19:28:51.067746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.894 [2024-07-24 19:28:51.067760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.895 qpair failed and we were unable to recover it. 00:28:04.895 [2024-07-24 19:28:51.068038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.895 [2024-07-24 19:28:51.068051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.895 qpair failed and we were unable to recover it. 00:28:04.895 [2024-07-24 19:28:51.068260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.895 [2024-07-24 19:28:51.068274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.895 qpair failed and we were unable to recover it. 00:28:04.895 [2024-07-24 19:28:51.068449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.895 [2024-07-24 19:28:51.068462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.895 qpair failed and we were unable to recover it. 00:28:04.895 [2024-07-24 19:28:51.068618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.895 [2024-07-24 19:28:51.068632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.895 qpair failed and we were unable to recover it. 
00:28:04.895 [2024-07-24 19:28:51.068841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.895 [2024-07-24 19:28:51.068854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.895 qpair failed and we were unable to recover it. 00:28:04.895 [2024-07-24 19:28:51.069090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.895 [2024-07-24 19:28:51.069103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.895 qpair failed and we were unable to recover it. 00:28:04.895 [2024-07-24 19:28:51.069379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.895 [2024-07-24 19:28:51.069392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.895 qpair failed and we were unable to recover it. 00:28:04.895 [2024-07-24 19:28:51.069667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.895 [2024-07-24 19:28:51.069681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.895 qpair failed and we were unable to recover it. 00:28:04.895 [2024-07-24 19:28:51.069962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.895 [2024-07-24 19:28:51.069978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.895 qpair failed and we were unable to recover it. 00:28:04.895 [2024-07-24 19:28:51.070256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.895 [2024-07-24 19:28:51.070269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.895 qpair failed and we were unable to recover it. 00:28:04.895 [2024-07-24 19:28:51.070509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.895 [2024-07-24 19:28:51.070523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.895 qpair failed and we were unable to recover it. 00:28:04.895 [2024-07-24 19:28:51.070751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.895 [2024-07-24 19:28:51.070765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.895 qpair failed and we were unable to recover it. 00:28:04.895 [2024-07-24 19:28:51.071010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.895 [2024-07-24 19:28:51.071023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.895 qpair failed and we were unable to recover it. 00:28:04.895 [2024-07-24 19:28:51.071310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.895 [2024-07-24 19:28:51.071323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.895 qpair failed and we were unable to recover it. 
00:28:04.895 [2024-07-24 19:28:51.071479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.895 [2024-07-24 19:28:51.071493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.895 qpair failed and we were unable to recover it. 00:28:04.895 [2024-07-24 19:28:51.071633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.895 [2024-07-24 19:28:51.071647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.895 qpair failed and we were unable to recover it. 00:28:04.895 [2024-07-24 19:28:51.071809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.895 [2024-07-24 19:28:51.071821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.895 qpair failed and we were unable to recover it. 00:28:04.895 [2024-07-24 19:28:51.072120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.895 [2024-07-24 19:28:51.072133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.895 qpair failed and we were unable to recover it. 00:28:04.895 [2024-07-24 19:28:51.072343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.895 [2024-07-24 19:28:51.072356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.895 qpair failed and we were unable to recover it. 00:28:04.895 [2024-07-24 19:28:51.072600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.895 [2024-07-24 19:28:51.072614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.895 qpair failed and we were unable to recover it. 00:28:04.895 [2024-07-24 19:28:51.072795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.895 [2024-07-24 19:28:51.072809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.895 qpair failed and we were unable to recover it. 00:28:04.895 [2024-07-24 19:28:51.073014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.895 [2024-07-24 19:28:51.073027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.895 qpair failed and we were unable to recover it. 00:28:04.895 [2024-07-24 19:28:51.073191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.895 [2024-07-24 19:28:51.073205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.895 qpair failed and we were unable to recover it. 00:28:04.895 [2024-07-24 19:28:51.073450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.895 [2024-07-24 19:28:51.073463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.895 qpair failed and we were unable to recover it. 
00:28:04.895 [2024-07-24 19:28:51.073720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.895 [2024-07-24 19:28:51.073733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.895 qpair failed and we were unable to recover it. 00:28:04.895 [2024-07-24 19:28:51.073957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.895 [2024-07-24 19:28:51.073971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.895 qpair failed and we were unable to recover it. 00:28:04.895 [2024-07-24 19:28:51.074197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.895 [2024-07-24 19:28:51.074210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.895 qpair failed and we were unable to recover it. 00:28:04.895 [2024-07-24 19:28:51.074442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.895 [2024-07-24 19:28:51.074456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.895 qpair failed and we were unable to recover it. 00:28:04.895 [2024-07-24 19:28:51.074679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.895 [2024-07-24 19:28:51.074692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.895 qpair failed and we were unable to recover it. 00:28:04.895 [2024-07-24 19:28:51.074917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.895 [2024-07-24 19:28:51.074931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.895 qpair failed and we were unable to recover it. 00:28:04.895 [2024-07-24 19:28:51.075155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.895 [2024-07-24 19:28:51.075168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.895 qpair failed and we were unable to recover it. 00:28:04.895 [2024-07-24 19:28:51.075332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.895 [2024-07-24 19:28:51.075346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.895 qpair failed and we were unable to recover it. 00:28:04.895 [2024-07-24 19:28:51.075575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.896 [2024-07-24 19:28:51.075588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.896 qpair failed and we were unable to recover it. 00:28:04.896 [2024-07-24 19:28:51.075770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.896 [2024-07-24 19:28:51.075784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.896 qpair failed and we were unable to recover it. 
00:28:04.896 [2024-07-24 19:28:51.075926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.896 [2024-07-24 19:28:51.075939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.896 qpair failed and we were unable to recover it. 00:28:04.896 [2024-07-24 19:28:51.076221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.896 [2024-07-24 19:28:51.076234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.896 qpair failed and we were unable to recover it. 00:28:04.896 [2024-07-24 19:28:51.076397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.896 [2024-07-24 19:28:51.076410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.896 qpair failed and we were unable to recover it. 00:28:04.896 [2024-07-24 19:28:51.076561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.896 [2024-07-24 19:28:51.076575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.896 qpair failed and we were unable to recover it. 00:28:04.896 [2024-07-24 19:28:51.076798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.896 [2024-07-24 19:28:51.076811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.896 qpair failed and we were unable to recover it. 00:28:04.896 [2024-07-24 19:28:51.077030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.896 [2024-07-24 19:28:51.077044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.896 qpair failed and we were unable to recover it. 00:28:04.896 [2024-07-24 19:28:51.077251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.896 [2024-07-24 19:28:51.077265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:04.896 qpair failed and we were unable to recover it. 00:28:05.178 [2024-07-24 19:28:51.077566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.178 [2024-07-24 19:28:51.077580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.178 qpair failed and we were unable to recover it. 00:28:05.178 [2024-07-24 19:28:51.077685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.178 [2024-07-24 19:28:51.077699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.178 qpair failed and we were unable to recover it. 00:28:05.178 [2024-07-24 19:28:51.077932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.178 [2024-07-24 19:28:51.077946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.178 qpair failed and we were unable to recover it. 
00:28:05.178 [2024-07-24 19:28:51.078112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.178 [2024-07-24 19:28:51.078127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.178 qpair failed and we were unable to recover it. 00:28:05.178 [2024-07-24 19:28:51.078354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.178 [2024-07-24 19:28:51.078367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.178 qpair failed and we were unable to recover it. 00:28:05.178 [2024-07-24 19:28:51.078633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.178 [2024-07-24 19:28:51.078646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.178 qpair failed and we were unable to recover it. 00:28:05.178 [2024-07-24 19:28:51.078875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.178 [2024-07-24 19:28:51.078889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.178 qpair failed and we were unable to recover it. 00:28:05.178 [2024-07-24 19:28:51.079136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.178 [2024-07-24 19:28:51.079151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.178 qpair failed and we were unable to recover it. 00:28:05.178 [2024-07-24 19:28:51.079377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.178 [2024-07-24 19:28:51.079390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.178 qpair failed and we were unable to recover it. 00:28:05.178 [2024-07-24 19:28:51.079597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.178 [2024-07-24 19:28:51.079610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.178 qpair failed and we were unable to recover it. 00:28:05.178 [2024-07-24 19:28:51.079853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.178 [2024-07-24 19:28:51.079866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.178 qpair failed and we were unable to recover it. 00:28:05.178 [2024-07-24 19:28:51.080088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.178 [2024-07-24 19:28:51.080101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.178 qpair failed and we were unable to recover it. 00:28:05.178 [2024-07-24 19:28:51.080242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.178 [2024-07-24 19:28:51.080255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.178 qpair failed and we were unable to recover it. 
00:28:05.178 [2024-07-24 19:28:51.080482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.178 [2024-07-24 19:28:51.080495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.178 qpair failed and we were unable to recover it. 00:28:05.178 [2024-07-24 19:28:51.080718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.178 [2024-07-24 19:28:51.080732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.178 qpair failed and we were unable to recover it. 00:28:05.178 [2024-07-24 19:28:51.080941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.178 [2024-07-24 19:28:51.080955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.178 qpair failed and we were unable to recover it. 00:28:05.178 [2024-07-24 19:28:51.081175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.178 [2024-07-24 19:28:51.081188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.178 qpair failed and we were unable to recover it. 00:28:05.178 [2024-07-24 19:28:51.081396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.178 [2024-07-24 19:28:51.081410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.178 qpair failed and we were unable to recover it. 00:28:05.178 [2024-07-24 19:28:51.081619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.178 [2024-07-24 19:28:51.081632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.178 qpair failed and we were unable to recover it. 00:28:05.178 [2024-07-24 19:28:51.081798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.178 [2024-07-24 19:28:51.081811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.178 qpair failed and we were unable to recover it. 00:28:05.178 [2024-07-24 19:28:51.082028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.178 [2024-07-24 19:28:51.082041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.178 qpair failed and we were unable to recover it. 00:28:05.178 [2024-07-24 19:28:51.082144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.178 [2024-07-24 19:28:51.082157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.178 qpair failed and we were unable to recover it. 00:28:05.178 [2024-07-24 19:28:51.082368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.178 [2024-07-24 19:28:51.082381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.178 qpair failed and we were unable to recover it. 
00:28:05.178 [2024-07-24 19:28:51.082672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.178 [2024-07-24 19:28:51.082685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.178 qpair failed and we were unable to recover it. 00:28:05.178 [2024-07-24 19:28:51.082924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.178 [2024-07-24 19:28:51.082937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.178 qpair failed and we were unable to recover it. 00:28:05.178 [2024-07-24 19:28:51.083148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-07-24 19:28:51.083161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-07-24 19:28:51.083407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-07-24 19:28:51.083420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-07-24 19:28:51.083584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-07-24 19:28:51.083598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-07-24 19:28:51.083816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-07-24 19:28:51.083830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-07-24 19:28:51.084049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-07-24 19:28:51.084062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-07-24 19:28:51.084237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-07-24 19:28:51.084250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-07-24 19:28:51.084460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-07-24 19:28:51.084474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-07-24 19:28:51.084793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-07-24 19:28:51.084807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 
00:28:05.179 [2024-07-24 19:28:51.085034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-07-24 19:28:51.085047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-07-24 19:28:51.085196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-07-24 19:28:51.085209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-07-24 19:28:51.085361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-07-24 19:28:51.085374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-07-24 19:28:51.085543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-07-24 19:28:51.085556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-07-24 19:28:51.085703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-07-24 19:28:51.085729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-07-24 19:28:51.085946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-07-24 19:28:51.085959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-07-24 19:28:51.086235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-07-24 19:28:51.086248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-07-24 19:28:51.086477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-07-24 19:28:51.086490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-07-24 19:28:51.086653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-07-24 19:28:51.086667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-07-24 19:28:51.086900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-07-24 19:28:51.086914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 
00:28:05.179 [2024-07-24 19:28:51.087165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-07-24 19:28:51.087178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-07-24 19:28:51.087331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-07-24 19:28:51.087344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-07-24 19:28:51.087497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-07-24 19:28:51.087510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-07-24 19:28:51.087731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-07-24 19:28:51.087744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-07-24 19:28:51.087903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-07-24 19:28:51.087918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-07-24 19:28:51.088147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-07-24 19:28:51.088160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-07-24 19:28:51.088382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-07-24 19:28:51.088396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-07-24 19:28:51.088607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-07-24 19:28:51.088620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-07-24 19:28:51.088772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-07-24 19:28:51.088786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-07-24 19:28:51.089011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-07-24 19:28:51.089024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 
00:28:05.179 [2024-07-24 19:28:51.089274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-07-24 19:28:51.089287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-07-24 19:28:51.089384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-07-24 19:28:51.089397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-07-24 19:28:51.089624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-07-24 19:28:51.089637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-07-24 19:28:51.089916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-07-24 19:28:51.089929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-07-24 19:28:51.090085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-07-24 19:28:51.090099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-07-24 19:28:51.090307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-07-24 19:28:51.090321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.179 [2024-07-24 19:28:51.090482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.179 [2024-07-24 19:28:51.090495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.179 qpair failed and we were unable to recover it. 00:28:05.180 [2024-07-24 19:28:51.090705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-07-24 19:28:51.090723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-07-24 19:28:51.090876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-07-24 19:28:51.090889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-07-24 19:28:51.091118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-07-24 19:28:51.091131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 
00:28:05.180 [2024-07-24 19:28:51.091360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-07-24 19:28:51.091373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-07-24 19:28:51.091505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-07-24 19:28:51.091519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-07-24 19:28:51.091671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-07-24 19:28:51.091685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-07-24 19:28:51.091981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-07-24 19:28:51.091995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-07-24 19:28:51.092135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-07-24 19:28:51.092148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-07-24 19:28:51.092319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-07-24 19:28:51.092332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-07-24 19:28:51.092630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-07-24 19:28:51.092643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-07-24 19:28:51.092877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-07-24 19:28:51.092891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-07-24 19:28:51.093121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-07-24 19:28:51.093134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-07-24 19:28:51.093451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-07-24 19:28:51.093465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 
00:28:05.180 [2024-07-24 19:28:51.093612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-07-24 19:28:51.093626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-07-24 19:28:51.093800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-07-24 19:28:51.093820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-07-24 19:28:51.094038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-07-24 19:28:51.094051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-07-24 19:28:51.094326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-07-24 19:28:51.094339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-07-24 19:28:51.094585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-07-24 19:28:51.094598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-07-24 19:28:51.094807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-07-24 19:28:51.094821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-07-24 19:28:51.095145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-07-24 19:28:51.095159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-07-24 19:28:51.095370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-07-24 19:28:51.095383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-07-24 19:28:51.095679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-07-24 19:28:51.095693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-07-24 19:28:51.095885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-07-24 19:28:51.095897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 
00:28:05.180 [2024-07-24 19:28:51.096044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-07-24 19:28:51.096057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-07-24 19:28:51.096353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-07-24 19:28:51.096366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-07-24 19:28:51.096590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-07-24 19:28:51.096603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-07-24 19:28:51.096766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-07-24 19:28:51.096779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-07-24 19:28:51.096926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-07-24 19:28:51.096941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-07-24 19:28:51.097168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-07-24 19:28:51.097181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-07-24 19:28:51.097429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-07-24 19:28:51.097442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-07-24 19:28:51.097590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-07-24 19:28:51.097604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-07-24 19:28:51.097902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-07-24 19:28:51.097916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-07-24 19:28:51.098138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-07-24 19:28:51.098151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 
00:28:05.180 [2024-07-24 19:28:51.098235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-07-24 19:28:51.098247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.180 [2024-07-24 19:28:51.098523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.180 [2024-07-24 19:28:51.098537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.180 qpair failed and we were unable to recover it. 00:28:05.181 [2024-07-24 19:28:51.098756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-07-24 19:28:51.098768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-07-24 19:28:51.098988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-07-24 19:28:51.099001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-07-24 19:28:51.099220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-07-24 19:28:51.099233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-07-24 19:28:51.099347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-07-24 19:28:51.099360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-07-24 19:28:51.099631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-07-24 19:28:51.099644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-07-24 19:28:51.099889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-07-24 19:28:51.099904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-07-24 19:28:51.100182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-07-24 19:28:51.100195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-07-24 19:28:51.100404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-07-24 19:28:51.100417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 
00:28:05.181 [2024-07-24 19:28:51.100514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-07-24 19:28:51.100526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-07-24 19:28:51.100670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-07-24 19:28:51.100683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-07-24 19:28:51.100935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-07-24 19:28:51.100950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-07-24 19:28:51.101179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-07-24 19:28:51.101192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-07-24 19:28:51.101346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-07-24 19:28:51.101359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-07-24 19:28:51.101567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-07-24 19:28:51.101581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-07-24 19:28:51.101787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-07-24 19:28:51.101800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-07-24 19:28:51.102022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-07-24 19:28:51.102035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-07-24 19:28:51.102182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-07-24 19:28:51.102195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-07-24 19:28:51.102410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-07-24 19:28:51.102424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 
00:28:05.181 [2024-07-24 19:28:51.102684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-07-24 19:28:51.102698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-07-24 19:28:51.102915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-07-24 19:28:51.102929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-07-24 19:28:51.103088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-07-24 19:28:51.103102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-07-24 19:28:51.103243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-07-24 19:28:51.103256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-07-24 19:28:51.103464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-07-24 19:28:51.103478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-07-24 19:28:51.103776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-07-24 19:28:51.103789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-07-24 19:28:51.104067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-07-24 19:28:51.104080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-07-24 19:28:51.104234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-07-24 19:28:51.104247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-07-24 19:28:51.104491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-07-24 19:28:51.104504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 00:28:05.181 [2024-07-24 19:28:51.104662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.181 [2024-07-24 19:28:51.104675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.181 qpair failed and we were unable to recover it. 
[... the same three-line error sequence (posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 19:28:51.104891 through 19:28:51.143526, console timestamps 00:28:05.181 through 00:28:05.187 ...]
00:28:05.187 [2024-07-24 19:28:51.143704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.187 [2024-07-24 19:28:51.143722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.187 qpair failed and we were unable to recover it. 00:28:05.187 [2024-07-24 19:28:51.143937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.187 [2024-07-24 19:28:51.143953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.187 qpair failed and we were unable to recover it. 00:28:05.187 [2024-07-24 19:28:51.144109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.187 [2024-07-24 19:28:51.144122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.187 qpair failed and we were unable to recover it. 00:28:05.187 [2024-07-24 19:28:51.144329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.187 [2024-07-24 19:28:51.144343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.187 qpair failed and we were unable to recover it. 00:28:05.187 [2024-07-24 19:28:51.144535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.187 [2024-07-24 19:28:51.144549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.187 qpair failed and we were unable to recover it. 00:28:05.187 [2024-07-24 19:28:51.144796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.187 [2024-07-24 19:28:51.144809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.187 qpair failed and we were unable to recover it. 00:28:05.187 [2024-07-24 19:28:51.145019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.187 [2024-07-24 19:28:51.145032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.187 qpair failed and we were unable to recover it. 00:28:05.187 [2024-07-24 19:28:51.145190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.187 [2024-07-24 19:28:51.145203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.187 qpair failed and we were unable to recover it. 00:28:05.187 [2024-07-24 19:28:51.145367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.187 [2024-07-24 19:28:51.145380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.187 qpair failed and we were unable to recover it. 00:28:05.187 [2024-07-24 19:28:51.145627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.187 [2024-07-24 19:28:51.145640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.187 qpair failed and we were unable to recover it. 
00:28:05.187 [2024-07-24 19:28:51.145814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.187 [2024-07-24 19:28:51.145828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.187 qpair failed and we were unable to recover it. 00:28:05.187 [2024-07-24 19:28:51.145919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.187 [2024-07-24 19:28:51.145932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.187 qpair failed and we were unable to recover it. 00:28:05.187 [2024-07-24 19:28:51.146161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.187 [2024-07-24 19:28:51.146174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.187 qpair failed and we were unable to recover it. 00:28:05.187 [2024-07-24 19:28:51.146315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.187 [2024-07-24 19:28:51.146329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.187 qpair failed and we were unable to recover it. 00:28:05.187 [2024-07-24 19:28:51.146473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.187 [2024-07-24 19:28:51.146487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.187 qpair failed and we were unable to recover it. 00:28:05.187 [2024-07-24 19:28:51.146640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.187 [2024-07-24 19:28:51.146653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.187 qpair failed and we were unable to recover it. 00:28:05.187 [2024-07-24 19:28:51.146810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.187 [2024-07-24 19:28:51.146823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.187 qpair failed and we were unable to recover it. 00:28:05.187 [2024-07-24 19:28:51.146975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.187 [2024-07-24 19:28:51.146988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.187 qpair failed and we were unable to recover it. 00:28:05.187 [2024-07-24 19:28:51.147197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.187 [2024-07-24 19:28:51.147210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.187 qpair failed and we were unable to recover it. 00:28:05.187 [2024-07-24 19:28:51.147444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.187 [2024-07-24 19:28:51.147457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.187 qpair failed and we were unable to recover it. 
00:28:05.188 [2024-07-24 19:28:51.147593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.188 [2024-07-24 19:28:51.147606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.188 qpair failed and we were unable to recover it. 00:28:05.188 [2024-07-24 19:28:51.147767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.188 [2024-07-24 19:28:51.147780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.188 qpair failed and we were unable to recover it. 00:28:05.188 [2024-07-24 19:28:51.147996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.188 [2024-07-24 19:28:51.148009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.188 qpair failed and we were unable to recover it. 00:28:05.188 [2024-07-24 19:28:51.148182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.188 [2024-07-24 19:28:51.148195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.188 qpair failed and we were unable to recover it. 00:28:05.188 [2024-07-24 19:28:51.148415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.188 [2024-07-24 19:28:51.148428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.188 qpair failed and we were unable to recover it. 00:28:05.188 [2024-07-24 19:28:51.148657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.188 [2024-07-24 19:28:51.148670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.188 qpair failed and we were unable to recover it. 00:28:05.188 [2024-07-24 19:28:51.148946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.188 [2024-07-24 19:28:51.148959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.188 qpair failed and we were unable to recover it. 00:28:05.188 [2024-07-24 19:28:51.149099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.188 [2024-07-24 19:28:51.149112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.188 qpair failed and we were unable to recover it. 00:28:05.188 [2024-07-24 19:28:51.149203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.188 [2024-07-24 19:28:51.149216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.188 qpair failed and we were unable to recover it. 00:28:05.188 [2024-07-24 19:28:51.149357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.188 [2024-07-24 19:28:51.149371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.188 qpair failed and we were unable to recover it. 
00:28:05.188 [2024-07-24 19:28:51.149527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.188 [2024-07-24 19:28:51.149541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.188 qpair failed and we were unable to recover it. 00:28:05.188 [2024-07-24 19:28:51.149732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.188 [2024-07-24 19:28:51.149746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.188 qpair failed and we were unable to recover it. 00:28:05.188 [2024-07-24 19:28:51.149901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.188 [2024-07-24 19:28:51.149914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.188 qpair failed and we were unable to recover it. 00:28:05.188 [2024-07-24 19:28:51.150089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.188 [2024-07-24 19:28:51.150103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.188 qpair failed and we were unable to recover it. 00:28:05.188 [2024-07-24 19:28:51.150351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.188 [2024-07-24 19:28:51.150364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.188 qpair failed and we were unable to recover it. 00:28:05.188 [2024-07-24 19:28:51.150573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.188 [2024-07-24 19:28:51.150587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.188 qpair failed and we were unable to recover it. 00:28:05.188 [2024-07-24 19:28:51.150799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.188 [2024-07-24 19:28:51.150812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.188 qpair failed and we were unable to recover it. 00:28:05.188 [2024-07-24 19:28:51.150983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.188 [2024-07-24 19:28:51.150996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.188 qpair failed and we were unable to recover it. 00:28:05.188 [2024-07-24 19:28:51.151135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.188 [2024-07-24 19:28:51.151148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.188 qpair failed and we were unable to recover it. 00:28:05.188 [2024-07-24 19:28:51.151358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.188 [2024-07-24 19:28:51.151371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.188 qpair failed and we were unable to recover it. 
00:28:05.188 [2024-07-24 19:28:51.151525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.188 [2024-07-24 19:28:51.151538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.188 qpair failed and we were unable to recover it. 00:28:05.188 [2024-07-24 19:28:51.151763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.188 [2024-07-24 19:28:51.151779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.188 qpair failed and we were unable to recover it. 00:28:05.188 [2024-07-24 19:28:51.152001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.188 [2024-07-24 19:28:51.152014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.188 qpair failed and we were unable to recover it. 00:28:05.188 [2024-07-24 19:28:51.152171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.188 [2024-07-24 19:28:51.152184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.188 qpair failed and we were unable to recover it. 00:28:05.188 [2024-07-24 19:28:51.152337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.188 [2024-07-24 19:28:51.152351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.188 qpair failed and we were unable to recover it. 00:28:05.188 [2024-07-24 19:28:51.152504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.188 [2024-07-24 19:28:51.152517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.188 qpair failed and we were unable to recover it. 00:28:05.188 [2024-07-24 19:28:51.152757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.188 [2024-07-24 19:28:51.152771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.188 qpair failed and we were unable to recover it. 00:28:05.188 [2024-07-24 19:28:51.153018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.188 [2024-07-24 19:28:51.153031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.188 qpair failed and we were unable to recover it. 00:28:05.188 [2024-07-24 19:28:51.153254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.188 [2024-07-24 19:28:51.153267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.188 qpair failed and we were unable to recover it. 00:28:05.189 [2024-07-24 19:28:51.153421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.189 [2024-07-24 19:28:51.153434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.189 qpair failed and we were unable to recover it. 
00:28:05.189 [2024-07-24 19:28:51.153580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.189 [2024-07-24 19:28:51.153593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.189 qpair failed and we were unable to recover it. 00:28:05.189 [2024-07-24 19:28:51.153747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.189 [2024-07-24 19:28:51.153761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.189 qpair failed and we were unable to recover it. 00:28:05.189 [2024-07-24 19:28:51.153858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.189 [2024-07-24 19:28:51.153871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.189 qpair failed and we were unable to recover it. 00:28:05.189 [2024-07-24 19:28:51.154010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.189 [2024-07-24 19:28:51.154023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.189 qpair failed and we were unable to recover it. 00:28:05.189 [2024-07-24 19:28:51.154251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.189 [2024-07-24 19:28:51.154264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.189 qpair failed and we were unable to recover it. 00:28:05.189 [2024-07-24 19:28:51.154407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.189 [2024-07-24 19:28:51.154420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.189 qpair failed and we were unable to recover it. 00:28:05.189 [2024-07-24 19:28:51.154627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.189 [2024-07-24 19:28:51.154640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.189 qpair failed and we were unable to recover it. 00:28:05.189 [2024-07-24 19:28:51.154866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.189 [2024-07-24 19:28:51.154879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.189 qpair failed and we were unable to recover it. 00:28:05.189 [2024-07-24 19:28:51.155038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.189 [2024-07-24 19:28:51.155051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.189 qpair failed and we were unable to recover it. 00:28:05.189 [2024-07-24 19:28:51.155193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.189 [2024-07-24 19:28:51.155206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.189 qpair failed and we were unable to recover it. 
00:28:05.189 [2024-07-24 19:28:51.155377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.189 [2024-07-24 19:28:51.155390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.189 qpair failed and we were unable to recover it. 00:28:05.189 [2024-07-24 19:28:51.155612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.189 [2024-07-24 19:28:51.155626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.189 qpair failed and we were unable to recover it. 00:28:05.189 [2024-07-24 19:28:51.155775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.189 [2024-07-24 19:28:51.155788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.189 qpair failed and we were unable to recover it. 00:28:05.189 [2024-07-24 19:28:51.156028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.189 [2024-07-24 19:28:51.156042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.189 qpair failed and we were unable to recover it. 00:28:05.189 [2024-07-24 19:28:51.156264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.189 [2024-07-24 19:28:51.156277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.189 qpair failed and we were unable to recover it. 00:28:05.189 [2024-07-24 19:28:51.156504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.189 [2024-07-24 19:28:51.156517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.189 qpair failed and we were unable to recover it. 00:28:05.189 [2024-07-24 19:28:51.156673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.189 [2024-07-24 19:28:51.156687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.189 qpair failed and we were unable to recover it. 00:28:05.189 [2024-07-24 19:28:51.156903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.189 [2024-07-24 19:28:51.156916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.189 qpair failed and we were unable to recover it. 00:28:05.189 [2024-07-24 19:28:51.157122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.189 [2024-07-24 19:28:51.157135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.189 qpair failed and we were unable to recover it. 00:28:05.189 [2024-07-24 19:28:51.157293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.189 [2024-07-24 19:28:51.157306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.189 qpair failed and we were unable to recover it. 
00:28:05.189 [2024-07-24 19:28:51.157585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.189 [2024-07-24 19:28:51.157598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.189 qpair failed and we were unable to recover it. 00:28:05.189 [2024-07-24 19:28:51.157752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.189 [2024-07-24 19:28:51.157766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.189 qpair failed and we were unable to recover it. 00:28:05.189 [2024-07-24 19:28:51.157970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.189 [2024-07-24 19:28:51.157984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.189 qpair failed and we were unable to recover it. 00:28:05.189 [2024-07-24 19:28:51.158260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.189 [2024-07-24 19:28:51.158273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.189 qpair failed and we were unable to recover it. 00:28:05.189 [2024-07-24 19:28:51.158508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.189 [2024-07-24 19:28:51.158522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.189 qpair failed and we were unable to recover it. 00:28:05.189 [2024-07-24 19:28:51.158739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.189 [2024-07-24 19:28:51.158752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.189 qpair failed and we were unable to recover it. 00:28:05.189 [2024-07-24 19:28:51.158960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.189 [2024-07-24 19:28:51.158973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.189 qpair failed and we were unable to recover it. 00:28:05.189 [2024-07-24 19:28:51.159196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.189 [2024-07-24 19:28:51.159209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.189 qpair failed and we were unable to recover it. 00:28:05.189 [2024-07-24 19:28:51.159435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.189 [2024-07-24 19:28:51.159448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.189 qpair failed and we were unable to recover it. 00:28:05.189 [2024-07-24 19:28:51.159606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.189 [2024-07-24 19:28:51.159619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.189 qpair failed and we were unable to recover it. 
00:28:05.189 [2024-07-24 19:28:51.159822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.189 [2024-07-24 19:28:51.159835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.189 qpair failed and we were unable to recover it. 00:28:05.189 [2024-07-24 19:28:51.159983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.189 [2024-07-24 19:28:51.159998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.189 qpair failed and we were unable to recover it. 00:28:05.189 [2024-07-24 19:28:51.160140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.189 [2024-07-24 19:28:51.160154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.189 qpair failed and we were unable to recover it. 00:28:05.189 [2024-07-24 19:28:51.160401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.189 [2024-07-24 19:28:51.160414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.189 qpair failed and we were unable to recover it. 00:28:05.190 [2024-07-24 19:28:51.160507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.190 [2024-07-24 19:28:51.160519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.190 qpair failed and we were unable to recover it. 00:28:05.190 [2024-07-24 19:28:51.160665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.190 [2024-07-24 19:28:51.160679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.190 qpair failed and we were unable to recover it. 00:28:05.190 [2024-07-24 19:28:51.160819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.190 [2024-07-24 19:28:51.160832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.190 qpair failed and we were unable to recover it. 00:28:05.190 [2024-07-24 19:28:51.160974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.190 [2024-07-24 19:28:51.160987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.190 qpair failed and we were unable to recover it. 00:28:05.190 [2024-07-24 19:28:51.161213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.190 [2024-07-24 19:28:51.161226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.190 qpair failed and we were unable to recover it. 00:28:05.190 [2024-07-24 19:28:51.161391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.190 [2024-07-24 19:28:51.161404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.190 qpair failed and we were unable to recover it. 
00:28:05.190 [2024-07-24 19:28:51.161567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.190 [2024-07-24 19:28:51.161580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.190 qpair failed and we were unable to recover it. 00:28:05.190 [2024-07-24 19:28:51.161813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.190 [2024-07-24 19:28:51.161826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.190 qpair failed and we were unable to recover it. 00:28:05.190 [2024-07-24 19:28:51.161904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.190 [2024-07-24 19:28:51.161916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.190 qpair failed and we were unable to recover it. 00:28:05.190 [2024-07-24 19:28:51.162173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.190 [2024-07-24 19:28:51.162186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.190 qpair failed and we were unable to recover it. 00:28:05.190 [2024-07-24 19:28:51.162324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.190 [2024-07-24 19:28:51.162337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.190 qpair failed and we were unable to recover it. 00:28:05.190 [2024-07-24 19:28:51.162488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.190 [2024-07-24 19:28:51.162502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.190 qpair failed and we were unable to recover it. 00:28:05.190 [2024-07-24 19:28:51.162653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.190 [2024-07-24 19:28:51.162666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.190 qpair failed and we were unable to recover it. 00:28:05.190 [2024-07-24 19:28:51.162770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.190 [2024-07-24 19:28:51.162783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.190 qpair failed and we were unable to recover it. 00:28:05.190 [2024-07-24 19:28:51.163079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.190 [2024-07-24 19:28:51.163092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.190 qpair failed and we were unable to recover it. 00:28:05.190 [2024-07-24 19:28:51.163289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.190 [2024-07-24 19:28:51.163301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.190 qpair failed and we were unable to recover it. 
00:28:05.190 [2024-07-24 19:28:51.163442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.190 [2024-07-24 19:28:51.163455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.190 qpair failed and we were unable to recover it. 00:28:05.190 [2024-07-24 19:28:51.163595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.190 [2024-07-24 19:28:51.163608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.190 qpair failed and we were unable to recover it. 00:28:05.190 [2024-07-24 19:28:51.163878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.190 [2024-07-24 19:28:51.163892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.190 qpair failed and we were unable to recover it. 00:28:05.190 [2024-07-24 19:28:51.164112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.190 [2024-07-24 19:28:51.164125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.190 qpair failed and we were unable to recover it. 00:28:05.190 [2024-07-24 19:28:51.164277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.190 [2024-07-24 19:28:51.164290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.190 qpair failed and we were unable to recover it. 00:28:05.190 [2024-07-24 19:28:51.164503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.190 [2024-07-24 19:28:51.164516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.190 qpair failed and we were unable to recover it. 00:28:05.190 [2024-07-24 19:28:51.164677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.190 [2024-07-24 19:28:51.164691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.190 qpair failed and we were unable to recover it. 00:28:05.190 [2024-07-24 19:28:51.164920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.190 [2024-07-24 19:28:51.164934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.190 qpair failed and we were unable to recover it. 00:28:05.190 [2024-07-24 19:28:51.165091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.190 [2024-07-24 19:28:51.165104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.190 qpair failed and we were unable to recover it. 00:28:05.190 [2024-07-24 19:28:51.165324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.190 [2024-07-24 19:28:51.165337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.190 qpair failed and we were unable to recover it. 
00:28:05.190 [2024-07-24 19:28:51.165482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.190 [2024-07-24 19:28:51.165495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.190 qpair failed and we were unable to recover it. 00:28:05.190 [2024-07-24 19:28:51.165793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.190 [2024-07-24 19:28:51.165806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.190 qpair failed and we were unable to recover it. 00:28:05.190 [2024-07-24 19:28:51.165978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.190 [2024-07-24 19:28:51.165991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.190 qpair failed and we were unable to recover it. 00:28:05.190 [2024-07-24 19:28:51.166142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.190 [2024-07-24 19:28:51.166155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.190 qpair failed and we were unable to recover it. 00:28:05.190 [2024-07-24 19:28:51.166452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.190 [2024-07-24 19:28:51.166466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.190 qpair failed and we were unable to recover it. 00:28:05.190 [2024-07-24 19:28:51.166712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.190 [2024-07-24 19:28:51.166727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.190 qpair failed and we were unable to recover it. 00:28:05.190 [2024-07-24 19:28:51.166886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.190 [2024-07-24 19:28:51.166899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.190 qpair failed and we were unable to recover it. 00:28:05.190 [2024-07-24 19:28:51.167197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.190 [2024-07-24 19:28:51.167210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.190 qpair failed and we were unable to recover it. 00:28:05.190 [2024-07-24 19:28:51.167386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.190 [2024-07-24 19:28:51.167399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.190 qpair failed and we were unable to recover it. 00:28:05.191 [2024-07-24 19:28:51.167557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.191 [2024-07-24 19:28:51.167571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.191 qpair failed and we were unable to recover it. 
00:28:05.191 [2024-07-24 19:28:51.167726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.191 [2024-07-24 19:28:51.167739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.191 qpair failed and we were unable to recover it. 00:28:05.191 [2024-07-24 19:28:51.167901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.191 [2024-07-24 19:28:51.167916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.191 qpair failed and we were unable to recover it. 00:28:05.191 [2024-07-24 19:28:51.168060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.191 [2024-07-24 19:28:51.168072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.191 qpair failed and we were unable to recover it. 00:28:05.191 [2024-07-24 19:28:51.168398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.191 [2024-07-24 19:28:51.168411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.191 qpair failed and we were unable to recover it. 00:28:05.191 [2024-07-24 19:28:51.168570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.191 [2024-07-24 19:28:51.168583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.191 qpair failed and we were unable to recover it. 00:28:05.191 [2024-07-24 19:28:51.168859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.191 [2024-07-24 19:28:51.168872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.191 qpair failed and we were unable to recover it. 00:28:05.191 [2024-07-24 19:28:51.169037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.191 [2024-07-24 19:28:51.169051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.191 qpair failed and we were unable to recover it. 00:28:05.191 [2024-07-24 19:28:51.169193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.191 [2024-07-24 19:28:51.169206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.191 qpair failed and we were unable to recover it. 00:28:05.191 [2024-07-24 19:28:51.169359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.191 [2024-07-24 19:28:51.169372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.191 qpair failed and we were unable to recover it. 00:28:05.191 [2024-07-24 19:28:51.169582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.191 [2024-07-24 19:28:51.169595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.191 qpair failed and we were unable to recover it. 
00:28:05.191 [2024-07-24 19:28:51.169773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.191 [2024-07-24 19:28:51.169786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.191 qpair failed and we were unable to recover it. 00:28:05.191 [2024-07-24 19:28:51.169940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.191 [2024-07-24 19:28:51.169953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.191 qpair failed and we were unable to recover it. 00:28:05.191 [2024-07-24 19:28:51.170193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.191 [2024-07-24 19:28:51.170206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.191 qpair failed and we were unable to recover it. 00:28:05.191 [2024-07-24 19:28:51.170414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.191 [2024-07-24 19:28:51.170427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.191 qpair failed and we were unable to recover it. 00:28:05.191 [2024-07-24 19:28:51.170591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.191 [2024-07-24 19:28:51.170604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.191 qpair failed and we were unable to recover it. 00:28:05.191 [2024-07-24 19:28:51.170744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.191 [2024-07-24 19:28:51.170758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.191 qpair failed and we were unable to recover it. 00:28:05.191 [2024-07-24 19:28:51.170993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.191 [2024-07-24 19:28:51.171007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.191 qpair failed and we were unable to recover it. 00:28:05.191 [2024-07-24 19:28:51.171156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.191 [2024-07-24 19:28:51.171170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.191 qpair failed and we were unable to recover it. 00:28:05.191 [2024-07-24 19:28:51.171311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.191 [2024-07-24 19:28:51.171324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.191 qpair failed and we were unable to recover it. 00:28:05.191 [2024-07-24 19:28:51.171478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.191 [2024-07-24 19:28:51.171491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.191 qpair failed and we were unable to recover it. 
00:28:05.191 [2024-07-24 19:28:51.171585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.191 [2024-07-24 19:28:51.171597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:05.191 qpair failed and we were unable to recover it.
[... identical connect() failed / sock connection error / qpair failed messages repeat for tqpair=0x7fd54c000b90 through 2024-07-24 19:28:51.189565 ...]
00:28:05.194 [2024-07-24 19:28:51.189743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.194 [2024-07-24 19:28:51.189784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420
00:28:05.194 qpair failed and we were unable to recover it.
[... the same messages repeat for tqpair=0x7fd544000b90 through 2024-07-24 19:28:51.192745 ...]
00:28:05.194 [2024-07-24 19:28:51.193000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.194 [2024-07-24 19:28:51.193014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:05.194 qpair failed and we were unable to recover it.
[... the same messages repeat for tqpair=0x7fd54c000b90 through 2024-07-24 19:28:51.212811 ...]
00:28:05.197 [2024-07-24 19:28:51.212970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.197 [2024-07-24 19:28:51.212983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.197 qpair failed and we were unable to recover it. 00:28:05.197 [2024-07-24 19:28:51.213205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.197 [2024-07-24 19:28:51.213218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.197 qpair failed and we were unable to recover it. 00:28:05.197 [2024-07-24 19:28:51.213425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.197 [2024-07-24 19:28:51.213438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.197 qpair failed and we were unable to recover it. 00:28:05.197 [2024-07-24 19:28:51.213578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.197 [2024-07-24 19:28:51.213591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.197 qpair failed and we were unable to recover it. 00:28:05.197 [2024-07-24 19:28:51.213850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.197 [2024-07-24 19:28:51.213864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.197 qpair failed and we were unable to recover it. 00:28:05.197 [2024-07-24 19:28:51.214094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.197 [2024-07-24 19:28:51.214107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.197 qpair failed and we were unable to recover it. 00:28:05.197 [2024-07-24 19:28:51.214336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.197 [2024-07-24 19:28:51.214349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.197 qpair failed and we were unable to recover it. 00:28:05.197 [2024-07-24 19:28:51.214571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.197 [2024-07-24 19:28:51.214584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.197 qpair failed and we were unable to recover it. 00:28:05.198 [2024-07-24 19:28:51.214806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.198 [2024-07-24 19:28:51.214819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.198 qpair failed and we were unable to recover it. 00:28:05.198 [2024-07-24 19:28:51.214992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.198 [2024-07-24 19:28:51.215005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.198 qpair failed and we were unable to recover it. 
00:28:05.198 [2024-07-24 19:28:51.215203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.198 [2024-07-24 19:28:51.215217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.198 qpair failed and we were unable to recover it. 00:28:05.198 [2024-07-24 19:28:51.215326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.198 [2024-07-24 19:28:51.215341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.198 qpair failed and we were unable to recover it. 00:28:05.198 [2024-07-24 19:28:51.215561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.198 [2024-07-24 19:28:51.215574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.198 qpair failed and we were unable to recover it. 00:28:05.198 [2024-07-24 19:28:51.215785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.198 [2024-07-24 19:28:51.215798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.198 qpair failed and we were unable to recover it. 00:28:05.198 [2024-07-24 19:28:51.216015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.198 [2024-07-24 19:28:51.216027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.198 qpair failed and we were unable to recover it. 00:28:05.198 [2024-07-24 19:28:51.216171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.198 [2024-07-24 19:28:51.216184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.198 qpair failed and we were unable to recover it. 00:28:05.198 [2024-07-24 19:28:51.216460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.198 [2024-07-24 19:28:51.216472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.198 qpair failed and we were unable to recover it. 00:28:05.198 [2024-07-24 19:28:51.216684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.198 [2024-07-24 19:28:51.216697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.198 qpair failed and we were unable to recover it. 00:28:05.198 [2024-07-24 19:28:51.216855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.198 [2024-07-24 19:28:51.216868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.198 qpair failed and we were unable to recover it. 00:28:05.198 [2024-07-24 19:28:51.217076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.198 [2024-07-24 19:28:51.217089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.198 qpair failed and we were unable to recover it. 
00:28:05.198 [2024-07-24 19:28:51.217387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.198 [2024-07-24 19:28:51.217400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.198 qpair failed and we were unable to recover it. 00:28:05.198 [2024-07-24 19:28:51.217625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.198 [2024-07-24 19:28:51.217638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.198 qpair failed and we were unable to recover it. 00:28:05.198 [2024-07-24 19:28:51.217780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.198 [2024-07-24 19:28:51.217794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.198 qpair failed and we were unable to recover it. 00:28:05.198 [2024-07-24 19:28:51.218002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.198 [2024-07-24 19:28:51.218015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.198 qpair failed and we were unable to recover it. 00:28:05.198 [2024-07-24 19:28:51.218261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.198 [2024-07-24 19:28:51.218274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.198 qpair failed and we were unable to recover it. 00:28:05.198 [2024-07-24 19:28:51.218516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.198 [2024-07-24 19:28:51.218529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.198 qpair failed and we were unable to recover it. 00:28:05.198 [2024-07-24 19:28:51.218784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.198 [2024-07-24 19:28:51.218798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.198 qpair failed and we were unable to recover it. 00:28:05.198 [2024-07-24 19:28:51.218895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.198 [2024-07-24 19:28:51.218907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.198 qpair failed and we were unable to recover it. 00:28:05.198 [2024-07-24 19:28:51.219066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.198 [2024-07-24 19:28:51.219079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.198 qpair failed and we were unable to recover it. 00:28:05.198 [2024-07-24 19:28:51.219231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.198 [2024-07-24 19:28:51.219244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.198 qpair failed and we were unable to recover it. 
00:28:05.198 [2024-07-24 19:28:51.219453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.198 [2024-07-24 19:28:51.219467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.198 qpair failed and we were unable to recover it. 00:28:05.198 [2024-07-24 19:28:51.219652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.198 [2024-07-24 19:28:51.219665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.198 qpair failed and we were unable to recover it. 00:28:05.198 [2024-07-24 19:28:51.219913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.198 [2024-07-24 19:28:51.219926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.198 qpair failed and we were unable to recover it. 00:28:05.198 [2024-07-24 19:28:51.220232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.198 [2024-07-24 19:28:51.220246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.198 qpair failed and we were unable to recover it. 00:28:05.198 [2024-07-24 19:28:51.220416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.198 [2024-07-24 19:28:51.220429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.198 qpair failed and we were unable to recover it. 00:28:05.198 [2024-07-24 19:28:51.220591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.198 [2024-07-24 19:28:51.220604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.198 qpair failed and we were unable to recover it. 00:28:05.198 [2024-07-24 19:28:51.220792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.198 [2024-07-24 19:28:51.220806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.198 qpair failed and we were unable to recover it. 00:28:05.198 [2024-07-24 19:28:51.221080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.199 [2024-07-24 19:28:51.221093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.199 qpair failed and we were unable to recover it. 00:28:05.199 [2024-07-24 19:28:51.221337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.199 [2024-07-24 19:28:51.221350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.199 qpair failed and we were unable to recover it. 00:28:05.199 [2024-07-24 19:28:51.221493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.199 [2024-07-24 19:28:51.221506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.199 qpair failed and we were unable to recover it. 
00:28:05.199 [2024-07-24 19:28:51.221723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.199 [2024-07-24 19:28:51.221737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.199 qpair failed and we were unable to recover it. 00:28:05.199 [2024-07-24 19:28:51.221886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.199 [2024-07-24 19:28:51.221900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.199 qpair failed and we were unable to recover it. 00:28:05.199 [2024-07-24 19:28:51.222049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.199 [2024-07-24 19:28:51.222062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.199 qpair failed and we were unable to recover it. 00:28:05.199 [2024-07-24 19:28:51.222211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.199 [2024-07-24 19:28:51.222225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.199 qpair failed and we were unable to recover it. 00:28:05.199 [2024-07-24 19:28:51.222391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.199 [2024-07-24 19:28:51.222404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.199 qpair failed and we were unable to recover it. 00:28:05.199 [2024-07-24 19:28:51.222578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.199 [2024-07-24 19:28:51.222591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.199 qpair failed and we were unable to recover it. 00:28:05.199 [2024-07-24 19:28:51.222800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.199 [2024-07-24 19:28:51.222814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.199 qpair failed and we were unable to recover it. 00:28:05.199 [2024-07-24 19:28:51.223025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.199 [2024-07-24 19:28:51.223038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.199 qpair failed and we were unable to recover it. 00:28:05.199 [2024-07-24 19:28:51.223192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.199 [2024-07-24 19:28:51.223205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.199 qpair failed and we were unable to recover it. 00:28:05.199 [2024-07-24 19:28:51.223348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.199 [2024-07-24 19:28:51.223362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.199 qpair failed and we were unable to recover it. 
00:28:05.199 [2024-07-24 19:28:51.223514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.199 [2024-07-24 19:28:51.223527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.199 qpair failed and we were unable to recover it. 00:28:05.199 [2024-07-24 19:28:51.223816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.199 [2024-07-24 19:28:51.223829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.199 qpair failed and we were unable to recover it. 00:28:05.199 [2024-07-24 19:28:51.223971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.199 [2024-07-24 19:28:51.223984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.199 qpair failed and we were unable to recover it. 00:28:05.199 [2024-07-24 19:28:51.224194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.199 [2024-07-24 19:28:51.224207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.199 qpair failed and we were unable to recover it. 00:28:05.199 [2024-07-24 19:28:51.224344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.199 [2024-07-24 19:28:51.224357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.199 qpair failed and we were unable to recover it. 00:28:05.199 [2024-07-24 19:28:51.224623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.199 [2024-07-24 19:28:51.224637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.199 qpair failed and we were unable to recover it. 00:28:05.199 [2024-07-24 19:28:51.224919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.199 [2024-07-24 19:28:51.224932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.199 qpair failed and we were unable to recover it. 00:28:05.199 [2024-07-24 19:28:51.225095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.199 [2024-07-24 19:28:51.225107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.199 qpair failed and we were unable to recover it. 00:28:05.199 [2024-07-24 19:28:51.225322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.199 [2024-07-24 19:28:51.225335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.199 qpair failed and we were unable to recover it. 00:28:05.199 [2024-07-24 19:28:51.225478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.199 [2024-07-24 19:28:51.225491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.199 qpair failed and we were unable to recover it. 
00:28:05.199 [2024-07-24 19:28:51.225698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.199 [2024-07-24 19:28:51.225712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.199 qpair failed and we were unable to recover it. 00:28:05.199 [2024-07-24 19:28:51.225941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.199 [2024-07-24 19:28:51.225955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.199 qpair failed and we were unable to recover it. 00:28:05.199 [2024-07-24 19:28:51.226095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.199 [2024-07-24 19:28:51.226109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.199 qpair failed and we were unable to recover it. 00:28:05.199 [2024-07-24 19:28:51.226193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.199 [2024-07-24 19:28:51.226205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.199 qpair failed and we were unable to recover it. 00:28:05.199 [2024-07-24 19:28:51.226363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.199 [2024-07-24 19:28:51.226375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.199 qpair failed and we were unable to recover it. 00:28:05.199 [2024-07-24 19:28:51.226600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.199 [2024-07-24 19:28:51.226614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.199 qpair failed and we were unable to recover it. 00:28:05.199 [2024-07-24 19:28:51.226764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.199 [2024-07-24 19:28:51.226777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.199 qpair failed and we were unable to recover it. 00:28:05.199 [2024-07-24 19:28:51.226987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.199 [2024-07-24 19:28:51.227000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.199 qpair failed and we were unable to recover it. 00:28:05.199 [2024-07-24 19:28:51.227207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.199 [2024-07-24 19:28:51.227220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.199 qpair failed and we were unable to recover it. 00:28:05.199 [2024-07-24 19:28:51.227435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.199 [2024-07-24 19:28:51.227448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.199 qpair failed and we were unable to recover it. 
00:28:05.199 [2024-07-24 19:28:51.227660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.199 [2024-07-24 19:28:51.227673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.199 qpair failed and we were unable to recover it. 00:28:05.199 [2024-07-24 19:28:51.227897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.199 [2024-07-24 19:28:51.227910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.199 qpair failed and we were unable to recover it. 00:28:05.199 [2024-07-24 19:28:51.228088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.200 [2024-07-24 19:28:51.228101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.200 qpair failed and we were unable to recover it. 00:28:05.200 [2024-07-24 19:28:51.228255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.200 [2024-07-24 19:28:51.228268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.200 qpair failed and we were unable to recover it. 00:28:05.200 [2024-07-24 19:28:51.228480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.200 [2024-07-24 19:28:51.228494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.200 qpair failed and we were unable to recover it. 00:28:05.200 [2024-07-24 19:28:51.228727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.200 [2024-07-24 19:28:51.228741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.200 qpair failed and we were unable to recover it. 00:28:05.200 [2024-07-24 19:28:51.228947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.200 [2024-07-24 19:28:51.228960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.200 qpair failed and we were unable to recover it. 00:28:05.200 [2024-07-24 19:28:51.229167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.200 [2024-07-24 19:28:51.229180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.200 qpair failed and we were unable to recover it. 00:28:05.200 [2024-07-24 19:28:51.229395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.200 [2024-07-24 19:28:51.229411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.200 qpair failed and we were unable to recover it. 00:28:05.200 [2024-07-24 19:28:51.229506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.200 [2024-07-24 19:28:51.229518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.200 qpair failed and we were unable to recover it. 
00:28:05.200 [2024-07-24 19:28:51.229670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.200 [2024-07-24 19:28:51.229683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.200 qpair failed and we were unable to recover it. 00:28:05.200 [2024-07-24 19:28:51.229846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.200 [2024-07-24 19:28:51.229860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.200 qpair failed and we were unable to recover it. 00:28:05.200 [2024-07-24 19:28:51.230073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.200 [2024-07-24 19:28:51.230086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.200 qpair failed and we were unable to recover it. 00:28:05.200 [2024-07-24 19:28:51.230226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.200 [2024-07-24 19:28:51.230239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.200 qpair failed and we were unable to recover it. 00:28:05.200 [2024-07-24 19:28:51.230390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.200 [2024-07-24 19:28:51.230403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.200 qpair failed and we were unable to recover it. 00:28:05.200 [2024-07-24 19:28:51.230559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.200 [2024-07-24 19:28:51.230572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.200 qpair failed and we were unable to recover it. 00:28:05.200 [2024-07-24 19:28:51.230781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.200 [2024-07-24 19:28:51.230794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.200 qpair failed and we were unable to recover it. 00:28:05.200 [2024-07-24 19:28:51.231004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.200 [2024-07-24 19:28:51.231017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.200 qpair failed and we were unable to recover it. 00:28:05.200 [2024-07-24 19:28:51.231190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.200 [2024-07-24 19:28:51.231203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.200 qpair failed and we were unable to recover it. 00:28:05.200 [2024-07-24 19:28:51.231417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.200 [2024-07-24 19:28:51.231432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.200 qpair failed and we were unable to recover it. 
00:28:05.200 [2024-07-24 19:28:51.231636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.200 [2024-07-24 19:28:51.231649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.200 qpair failed and we were unable to recover it. 00:28:05.200 [2024-07-24 19:28:51.231835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.200 [2024-07-24 19:28:51.231849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.200 qpair failed and we were unable to recover it. 00:28:05.200 [2024-07-24 19:28:51.232152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.200 [2024-07-24 19:28:51.232166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.200 qpair failed and we were unable to recover it. 00:28:05.200 [2024-07-24 19:28:51.232323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.200 [2024-07-24 19:28:51.232336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.200 qpair failed and we were unable to recover it. 00:28:05.200 [2024-07-24 19:28:51.232573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.200 [2024-07-24 19:28:51.232586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.200 qpair failed and we were unable to recover it. 00:28:05.200 [2024-07-24 19:28:51.232810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.200 [2024-07-24 19:28:51.232824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.200 qpair failed and we were unable to recover it. 00:28:05.200 [2024-07-24 19:28:51.232977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.200 [2024-07-24 19:28:51.232990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.200 qpair failed and we were unable to recover it. 00:28:05.200 [2024-07-24 19:28:51.233173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.200 [2024-07-24 19:28:51.233186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.200 qpair failed and we were unable to recover it. 00:28:05.200 [2024-07-24 19:28:51.233329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.200 [2024-07-24 19:28:51.233342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.200 qpair failed and we were unable to recover it. 00:28:05.200 [2024-07-24 19:28:51.233558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.200 [2024-07-24 19:28:51.233572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.200 qpair failed and we were unable to recover it. 
00:28:05.200 [2024-07-24 19:28:51.233844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.200 [2024-07-24 19:28:51.233857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.200 qpair failed and we were unable to recover it. 00:28:05.200 [2024-07-24 19:28:51.234004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.200 [2024-07-24 19:28:51.234017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.200 qpair failed and we were unable to recover it. 00:28:05.200 [2024-07-24 19:28:51.234295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.200 [2024-07-24 19:28:51.234308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.200 qpair failed and we were unable to recover it. 00:28:05.200 [2024-07-24 19:28:51.234524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.200 [2024-07-24 19:28:51.234538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.200 qpair failed and we were unable to recover it. 00:28:05.200 [2024-07-24 19:28:51.234747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.200 [2024-07-24 19:28:51.234760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.200 qpair failed and we were unable to recover it. 00:28:05.200 [2024-07-24 19:28:51.234909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.200 [2024-07-24 19:28:51.234922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.200 qpair failed and we were unable to recover it. 00:28:05.200 [2024-07-24 19:28:51.235198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.200 [2024-07-24 19:28:51.235211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.200 qpair failed and we were unable to recover it. 00:28:05.201 [2024-07-24 19:28:51.235365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.201 [2024-07-24 19:28:51.235378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.201 qpair failed and we were unable to recover it. 00:28:05.201 [2024-07-24 19:28:51.235596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.201 [2024-07-24 19:28:51.235609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.201 qpair failed and we were unable to recover it. 00:28:05.201 [2024-07-24 19:28:51.235774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.201 [2024-07-24 19:28:51.235789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.201 qpair failed and we were unable to recover it. 
00:28:05.201 [2024-07-24 19:28:51.235937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.201 [2024-07-24 19:28:51.235951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.201 qpair failed and we were unable to recover it. 00:28:05.201 [2024-07-24 19:28:51.236166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.201 [2024-07-24 19:28:51.236180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.201 qpair failed and we were unable to recover it. 00:28:05.201 [2024-07-24 19:28:51.236331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.201 [2024-07-24 19:28:51.236344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.201 qpair failed and we were unable to recover it. 00:28:05.201 [2024-07-24 19:28:51.236671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.201 [2024-07-24 19:28:51.236684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.201 qpair failed and we were unable to recover it. 00:28:05.201 [2024-07-24 19:28:51.236860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.201 [2024-07-24 19:28:51.236874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.201 qpair failed and we were unable to recover it. 00:28:05.201 [2024-07-24 19:28:51.237015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.201 [2024-07-24 19:28:51.237028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.201 qpair failed and we were unable to recover it. 00:28:05.201 [2024-07-24 19:28:51.237197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.201 [2024-07-24 19:28:51.237210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.201 qpair failed and we were unable to recover it. 00:28:05.201 [2024-07-24 19:28:51.237449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.201 [2024-07-24 19:28:51.237462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.201 qpair failed and we were unable to recover it. 00:28:05.201 [2024-07-24 19:28:51.237738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.201 [2024-07-24 19:28:51.237753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.201 qpair failed and we were unable to recover it. 00:28:05.201 [2024-07-24 19:28:51.237899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.201 [2024-07-24 19:28:51.237912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.201 qpair failed and we were unable to recover it. 
00:28:05.201 [2024-07-24 19:28:51.238126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.201 [2024-07-24 19:28:51.238139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.201 qpair failed and we were unable to recover it. 00:28:05.201 [2024-07-24 19:28:51.238301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.201 [2024-07-24 19:28:51.238314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.201 qpair failed and we were unable to recover it. 00:28:05.201 [2024-07-24 19:28:51.238540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.201 [2024-07-24 19:28:51.238554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.201 qpair failed and we were unable to recover it. 00:28:05.201 [2024-07-24 19:28:51.238710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.201 [2024-07-24 19:28:51.238730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.201 qpair failed and we were unable to recover it. 00:28:05.201 [2024-07-24 19:28:51.238947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.201 [2024-07-24 19:28:51.238961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.201 qpair failed and we were unable to recover it. 00:28:05.201 [2024-07-24 19:28:51.239190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.201 [2024-07-24 19:28:51.239203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.201 qpair failed and we were unable to recover it. 00:28:05.201 [2024-07-24 19:28:51.239418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.201 [2024-07-24 19:28:51.239431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.201 qpair failed and we were unable to recover it. 00:28:05.201 [2024-07-24 19:28:51.239576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.201 [2024-07-24 19:28:51.239590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.201 qpair failed and we were unable to recover it. 00:28:05.201 [2024-07-24 19:28:51.239750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.201 [2024-07-24 19:28:51.239763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.201 qpair failed and we were unable to recover it. 00:28:05.201 [2024-07-24 19:28:51.239998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.201 [2024-07-24 19:28:51.240011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.201 qpair failed and we were unable to recover it. 
00:28:05.201 [2024-07-24 19:28:51.240180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.201 [2024-07-24 19:28:51.240193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.201 qpair failed and we were unable to recover it.
[log collapsed: the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats with only timestamps changing, from [2024-07-24 19:28:51.240402] through [2024-07-24 19:28:51.284147], roughly 200 further occurrences]
00:28:05.207 [2024-07-24 19:28:51.284447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.207 [2024-07-24 19:28:51.284461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.207 qpair failed and we were unable to recover it. 00:28:05.207 [2024-07-24 19:28:51.284614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.207 [2024-07-24 19:28:51.284628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.207 qpair failed and we were unable to recover it. 00:28:05.207 [2024-07-24 19:28:51.284835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.207 [2024-07-24 19:28:51.284849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.207 qpair failed and we were unable to recover it. 00:28:05.207 [2024-07-24 19:28:51.284997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.207 [2024-07-24 19:28:51.285010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.207 qpair failed and we were unable to recover it. 00:28:05.207 [2024-07-24 19:28:51.285224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.207 [2024-07-24 19:28:51.285237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.207 qpair failed and we were unable to recover it. 00:28:05.207 [2024-07-24 19:28:51.285456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.207 [2024-07-24 19:28:51.285469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.207 qpair failed and we were unable to recover it. 00:28:05.207 [2024-07-24 19:28:51.285695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.207 [2024-07-24 19:28:51.285708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.207 qpair failed and we were unable to recover it. 00:28:05.207 [2024-07-24 19:28:51.285866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.208 [2024-07-24 19:28:51.285880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.208 qpair failed and we were unable to recover it. 00:28:05.208 [2024-07-24 19:28:51.286032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.208 [2024-07-24 19:28:51.286045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.208 qpair failed and we were unable to recover it. 00:28:05.208 [2024-07-24 19:28:51.286183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.208 [2024-07-24 19:28:51.286196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.208 qpair failed and we were unable to recover it. 
00:28:05.208 [2024-07-24 19:28:51.286352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.208 [2024-07-24 19:28:51.286366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.208 qpair failed and we were unable to recover it. 00:28:05.208 [2024-07-24 19:28:51.286574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.208 [2024-07-24 19:28:51.286587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.208 qpair failed and we were unable to recover it. 00:28:05.208 [2024-07-24 19:28:51.286753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.208 [2024-07-24 19:28:51.286766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.208 qpair failed and we were unable to recover it. 00:28:05.208 [2024-07-24 19:28:51.286932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.208 [2024-07-24 19:28:51.286946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.208 qpair failed and we were unable to recover it. 00:28:05.208 [2024-07-24 19:28:51.287152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.208 [2024-07-24 19:28:51.287165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.208 qpair failed and we were unable to recover it. 00:28:05.208 [2024-07-24 19:28:51.287336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.208 [2024-07-24 19:28:51.287349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.208 qpair failed and we were unable to recover it. 00:28:05.208 [2024-07-24 19:28:51.287504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.208 [2024-07-24 19:28:51.287517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.208 qpair failed and we were unable to recover it. 00:28:05.208 [2024-07-24 19:28:51.287672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.208 [2024-07-24 19:28:51.287686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.208 qpair failed and we were unable to recover it. 00:28:05.208 [2024-07-24 19:28:51.287933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.208 [2024-07-24 19:28:51.287948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.208 qpair failed and we were unable to recover it. 00:28:05.208 [2024-07-24 19:28:51.288163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.208 [2024-07-24 19:28:51.288176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.208 qpair failed and we were unable to recover it. 
00:28:05.208 [2024-07-24 19:28:51.288381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.208 [2024-07-24 19:28:51.288394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.208 qpair failed and we were unable to recover it. 00:28:05.208 [2024-07-24 19:28:51.288615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.208 [2024-07-24 19:28:51.288628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.208 qpair failed and we were unable to recover it. 00:28:05.208 [2024-07-24 19:28:51.288853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.208 [2024-07-24 19:28:51.288866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.208 qpair failed and we were unable to recover it. 00:28:05.208 [2024-07-24 19:28:51.289075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.208 [2024-07-24 19:28:51.289089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.208 qpair failed and we were unable to recover it. 00:28:05.208 [2024-07-24 19:28:51.289247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.208 [2024-07-24 19:28:51.289261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.208 qpair failed and we were unable to recover it. 00:28:05.208 [2024-07-24 19:28:51.289415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.208 [2024-07-24 19:28:51.289428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.208 qpair failed and we were unable to recover it. 00:28:05.208 [2024-07-24 19:28:51.289671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.208 [2024-07-24 19:28:51.289684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.208 qpair failed and we were unable to recover it. 00:28:05.208 [2024-07-24 19:28:51.289912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.208 [2024-07-24 19:28:51.289925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.208 qpair failed and we were unable to recover it. 00:28:05.208 [2024-07-24 19:28:51.290084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.208 [2024-07-24 19:28:51.290096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.208 qpair failed and we were unable to recover it. 00:28:05.208 [2024-07-24 19:28:51.290322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.208 [2024-07-24 19:28:51.290336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.208 qpair failed and we were unable to recover it. 
00:28:05.208 [2024-07-24 19:28:51.290544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.208 [2024-07-24 19:28:51.290558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.208 qpair failed and we were unable to recover it. 00:28:05.208 [2024-07-24 19:28:51.290719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.208 [2024-07-24 19:28:51.290733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.208 qpair failed and we were unable to recover it. 00:28:05.208 [2024-07-24 19:28:51.290883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.208 [2024-07-24 19:28:51.290897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.208 qpair failed and we were unable to recover it. 00:28:05.208 [2024-07-24 19:28:51.291113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.208 [2024-07-24 19:28:51.291127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.208 qpair failed and we were unable to recover it. 00:28:05.208 [2024-07-24 19:28:51.291370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.208 [2024-07-24 19:28:51.291383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.208 qpair failed and we were unable to recover it. 00:28:05.208 [2024-07-24 19:28:51.291537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.208 [2024-07-24 19:28:51.291550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.208 qpair failed and we were unable to recover it. 00:28:05.208 [2024-07-24 19:28:51.291723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.208 [2024-07-24 19:28:51.291736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.208 qpair failed and we were unable to recover it. 00:28:05.209 [2024-07-24 19:28:51.291924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.209 [2024-07-24 19:28:51.291937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.209 qpair failed and we were unable to recover it. 00:28:05.209 [2024-07-24 19:28:51.292156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.209 [2024-07-24 19:28:51.292169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.209 qpair failed and we were unable to recover it. 00:28:05.209 [2024-07-24 19:28:51.292387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.209 [2024-07-24 19:28:51.292401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.209 qpair failed and we were unable to recover it. 
00:28:05.209 [2024-07-24 19:28:51.292695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.209 [2024-07-24 19:28:51.292709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.209 qpair failed and we were unable to recover it. 00:28:05.209 [2024-07-24 19:28:51.292876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.209 [2024-07-24 19:28:51.292889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.209 qpair failed and we were unable to recover it. 00:28:05.209 [2024-07-24 19:28:51.293042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.209 [2024-07-24 19:28:51.293055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.209 qpair failed and we were unable to recover it. 00:28:05.209 [2024-07-24 19:28:51.293266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.209 [2024-07-24 19:28:51.293278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.209 qpair failed and we were unable to recover it. 00:28:05.209 [2024-07-24 19:28:51.293420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.209 [2024-07-24 19:28:51.293434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.209 qpair failed and we were unable to recover it. 00:28:05.209 [2024-07-24 19:28:51.293688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.209 [2024-07-24 19:28:51.293731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.209 qpair failed and we were unable to recover it. 00:28:05.209 [2024-07-24 19:28:51.293922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.209 [2024-07-24 19:28:51.293941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.209 qpair failed and we were unable to recover it. 00:28:05.209 [2024-07-24 19:28:51.294105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.209 [2024-07-24 19:28:51.294123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.209 qpair failed and we were unable to recover it. 00:28:05.209 [2024-07-24 19:28:51.294382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.209 [2024-07-24 19:28:51.294400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.209 qpair failed and we were unable to recover it. 00:28:05.209 [2024-07-24 19:28:51.294706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.209 [2024-07-24 19:28:51.294728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.209 qpair failed and we were unable to recover it. 
00:28:05.209 [2024-07-24 19:28:51.294960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.209 [2024-07-24 19:28:51.294978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.209 qpair failed and we were unable to recover it. 00:28:05.209 [2024-07-24 19:28:51.295206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.209 [2024-07-24 19:28:51.295223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.209 qpair failed and we were unable to recover it. 00:28:05.209 [2024-07-24 19:28:51.295442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.209 [2024-07-24 19:28:51.295460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.209 qpair failed and we were unable to recover it. 00:28:05.209 [2024-07-24 19:28:51.295793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.209 [2024-07-24 19:28:51.295812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.209 qpair failed and we were unable to recover it. 00:28:05.209 [2024-07-24 19:28:51.295980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.209 [2024-07-24 19:28:51.295998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.209 qpair failed and we were unable to recover it. 00:28:05.209 [2024-07-24 19:28:51.296199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.209 [2024-07-24 19:28:51.296217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.209 qpair failed and we were unable to recover it. 00:28:05.209 [2024-07-24 19:28:51.296502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.209 [2024-07-24 19:28:51.296519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.209 qpair failed and we were unable to recover it. 00:28:05.209 [2024-07-24 19:28:51.296640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.209 [2024-07-24 19:28:51.296657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.209 qpair failed and we were unable to recover it. 00:28:05.209 [2024-07-24 19:28:51.296837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.209 [2024-07-24 19:28:51.296854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.209 qpair failed and we were unable to recover it. 00:28:05.209 [2024-07-24 19:28:51.297081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.209 [2024-07-24 19:28:51.297099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.209 qpair failed and we were unable to recover it. 
00:28:05.209 [2024-07-24 19:28:51.297385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.209 [2024-07-24 19:28:51.297402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.209 qpair failed and we were unable to recover it. 00:28:05.209 [2024-07-24 19:28:51.297555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.209 [2024-07-24 19:28:51.297571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.209 qpair failed and we were unable to recover it. 00:28:05.209 [2024-07-24 19:28:51.297670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.209 [2024-07-24 19:28:51.297686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.209 qpair failed and we were unable to recover it. 00:28:05.209 [2024-07-24 19:28:51.297944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.209 [2024-07-24 19:28:51.297963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.209 qpair failed and we were unable to recover it. 00:28:05.209 [2024-07-24 19:28:51.298195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.209 [2024-07-24 19:28:51.298212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.209 qpair failed and we were unable to recover it. 00:28:05.209 [2024-07-24 19:28:51.298457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.209 [2024-07-24 19:28:51.298475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.209 qpair failed and we were unable to recover it. 00:28:05.209 [2024-07-24 19:28:51.298652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.209 [2024-07-24 19:28:51.298670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.209 qpair failed and we were unable to recover it. 00:28:05.209 [2024-07-24 19:28:51.298824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.209 [2024-07-24 19:28:51.298842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.209 qpair failed and we were unable to recover it. 00:28:05.209 [2024-07-24 19:28:51.299086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.209 [2024-07-24 19:28:51.299103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.209 qpair failed and we were unable to recover it. 00:28:05.209 [2024-07-24 19:28:51.299340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.209 [2024-07-24 19:28:51.299358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.209 qpair failed and we were unable to recover it. 
00:28:05.209 [2024-07-24 19:28:51.299584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.209 [2024-07-24 19:28:51.299602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.209 qpair failed and we were unable to recover it. 00:28:05.209 [2024-07-24 19:28:51.299774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.209 [2024-07-24 19:28:51.299793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.209 qpair failed and we were unable to recover it. 00:28:05.210 [2024-07-24 19:28:51.300078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.210 [2024-07-24 19:28:51.300097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.210 qpair failed and we were unable to recover it. 00:28:05.210 [2024-07-24 19:28:51.300282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.210 [2024-07-24 19:28:51.300300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.210 qpair failed and we were unable to recover it. 00:28:05.210 [2024-07-24 19:28:51.300462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.210 [2024-07-24 19:28:51.300479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.210 qpair failed and we were unable to recover it. 00:28:05.210 [2024-07-24 19:28:51.300635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.210 [2024-07-24 19:28:51.300652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.210 qpair failed and we were unable to recover it. 00:28:05.210 [2024-07-24 19:28:51.300814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.210 [2024-07-24 19:28:51.300831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.210 qpair failed and we were unable to recover it. 00:28:05.210 [2024-07-24 19:28:51.301074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.210 [2024-07-24 19:28:51.301091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.210 qpair failed and we were unable to recover it. 00:28:05.210 [2024-07-24 19:28:51.301314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.210 [2024-07-24 19:28:51.301332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.210 qpair failed and we were unable to recover it. 00:28:05.210 [2024-07-24 19:28:51.301569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.210 [2024-07-24 19:28:51.301587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.210 qpair failed and we were unable to recover it. 
00:28:05.210 [2024-07-24 19:28:51.301764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.210 [2024-07-24 19:28:51.301791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.210 qpair failed and we were unable to recover it. 00:28:05.210 [2024-07-24 19:28:51.301976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.210 [2024-07-24 19:28:51.301994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.210 qpair failed and we were unable to recover it. 00:28:05.210 [2024-07-24 19:28:51.302278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.210 [2024-07-24 19:28:51.302295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.210 qpair failed and we were unable to recover it. 00:28:05.210 [2024-07-24 19:28:51.302553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.210 [2024-07-24 19:28:51.302570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.210 qpair failed and we were unable to recover it. 00:28:05.210 [2024-07-24 19:28:51.302740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.210 [2024-07-24 19:28:51.302758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.210 qpair failed and we were unable to recover it. 00:28:05.210 [2024-07-24 19:28:51.302947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.210 [2024-07-24 19:28:51.302963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.210 qpair failed and we were unable to recover it. 00:28:05.210 [2024-07-24 19:28:51.303131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.210 [2024-07-24 19:28:51.303149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.210 qpair failed and we were unable to recover it. 00:28:05.210 [2024-07-24 19:28:51.303486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.210 [2024-07-24 19:28:51.303504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.210 qpair failed and we were unable to recover it. 00:28:05.210 [2024-07-24 19:28:51.303675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.210 [2024-07-24 19:28:51.303693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.210 qpair failed and we were unable to recover it. 00:28:05.210 [2024-07-24 19:28:51.303868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.210 [2024-07-24 19:28:51.303886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.210 qpair failed and we were unable to recover it. 
00:28:05.210 [2024-07-24 19:28:51.304149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.210 [2024-07-24 19:28:51.304166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.210 qpair failed and we were unable to recover it. 00:28:05.210 [2024-07-24 19:28:51.304330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.210 [2024-07-24 19:28:51.304347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.210 qpair failed and we were unable to recover it. 00:28:05.210 [2024-07-24 19:28:51.304591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.210 [2024-07-24 19:28:51.304609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.210 qpair failed and we were unable to recover it. 00:28:05.210 [2024-07-24 19:28:51.304897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.210 [2024-07-24 19:28:51.304915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.210 qpair failed and we were unable to recover it. 00:28:05.210 [2024-07-24 19:28:51.305133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.210 [2024-07-24 19:28:51.305150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.210 qpair failed and we were unable to recover it. 00:28:05.210 [2024-07-24 19:28:51.305303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.210 [2024-07-24 19:28:51.305321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.210 qpair failed and we were unable to recover it. 00:28:05.210 [2024-07-24 19:28:51.305503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.210 [2024-07-24 19:28:51.305520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.210 qpair failed and we were unable to recover it. 00:28:05.210 [2024-07-24 19:28:51.305674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.210 [2024-07-24 19:28:51.305691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.210 qpair failed and we were unable to recover it. 00:28:05.210 [2024-07-24 19:28:51.305911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.210 [2024-07-24 19:28:51.305929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.210 qpair failed and we were unable to recover it. 00:28:05.210 [2024-07-24 19:28:51.306163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.210 [2024-07-24 19:28:51.306183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.210 qpair failed and we were unable to recover it. 
00:28:05.210 [2024-07-24 19:28:51.306421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.210 [2024-07-24 19:28:51.306439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.210 qpair failed and we were unable to recover it. 00:28:05.210 [2024-07-24 19:28:51.306606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.210 [2024-07-24 19:28:51.306623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.210 qpair failed and we were unable to recover it. 00:28:05.210 [2024-07-24 19:28:51.306790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.210 [2024-07-24 19:28:51.306809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.210 qpair failed and we were unable to recover it. 00:28:05.210 [2024-07-24 19:28:51.307034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.210 [2024-07-24 19:28:51.307052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.210 qpair failed and we were unable to recover it. 00:28:05.210 [2024-07-24 19:28:51.307304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.210 [2024-07-24 19:28:51.307321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.210 qpair failed and we were unable to recover it. 00:28:05.210 [2024-07-24 19:28:51.307549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.210 [2024-07-24 19:28:51.307567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.210 qpair failed and we were unable to recover it. 00:28:05.210 [2024-07-24 19:28:51.307725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.210 [2024-07-24 19:28:51.307743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.210 qpair failed and we were unable to recover it. 00:28:05.210 [2024-07-24 19:28:51.307910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.211 [2024-07-24 19:28:51.307928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.211 qpair failed and we were unable to recover it. 00:28:05.211 [2024-07-24 19:28:51.308026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.211 [2024-07-24 19:28:51.308043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.211 qpair failed and we were unable to recover it. 00:28:05.211 [2024-07-24 19:28:51.308276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.211 [2024-07-24 19:28:51.308293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.211 qpair failed and we were unable to recover it. 
00:28:05.211 [2024-07-24 19:28:51.308482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.211 [2024-07-24 19:28:51.308499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.211 qpair failed and we were unable to recover it. 00:28:05.211 [2024-07-24 19:28:51.308652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.211 [2024-07-24 19:28:51.308670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.211 qpair failed and we were unable to recover it. 00:28:05.211 [2024-07-24 19:28:51.308925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.211 [2024-07-24 19:28:51.308943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.211 qpair failed and we were unable to recover it. 00:28:05.211 [2024-07-24 19:28:51.309067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.211 [2024-07-24 19:28:51.309085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.211 qpair failed and we were unable to recover it. 00:28:05.211 [2024-07-24 19:28:51.309255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.211 [2024-07-24 19:28:51.309273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.211 qpair failed and we were unable to recover it. 00:28:05.211 [2024-07-24 19:28:51.309428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.211 [2024-07-24 19:28:51.309446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.211 qpair failed and we were unable to recover it. 00:28:05.211 [2024-07-24 19:28:51.309674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.211 [2024-07-24 19:28:51.309692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.211 qpair failed and we were unable to recover it. 00:28:05.211 [2024-07-24 19:28:51.309924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.211 [2024-07-24 19:28:51.309942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.211 qpair failed and we were unable to recover it. 00:28:05.211 [2024-07-24 19:28:51.310111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.211 [2024-07-24 19:28:51.310128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.211 qpair failed and we were unable to recover it. 00:28:05.211 [2024-07-24 19:28:51.310374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.211 [2024-07-24 19:28:51.310391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.211 qpair failed and we were unable to recover it. 
00:28:05.211 [2024-07-24 19:28:51.310557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.211 [2024-07-24 19:28:51.310575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.211 qpair failed and we were unable to recover it. 00:28:05.211 [2024-07-24 19:28:51.310731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.211 [2024-07-24 19:28:51.310748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.211 qpair failed and we were unable to recover it. 00:28:05.211 [2024-07-24 19:28:51.310924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.211 [2024-07-24 19:28:51.310942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.211 qpair failed and we were unable to recover it. 00:28:05.211 [2024-07-24 19:28:51.311178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.211 [2024-07-24 19:28:51.311196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.211 qpair failed and we were unable to recover it. 00:28:05.211 [2024-07-24 19:28:51.311365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.211 [2024-07-24 19:28:51.311382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.211 qpair failed and we were unable to recover it. 00:28:05.211 [2024-07-24 19:28:51.311607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.211 [2024-07-24 19:28:51.311624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.211 qpair failed and we were unable to recover it. 00:28:05.211 [2024-07-24 19:28:51.311886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.211 [2024-07-24 19:28:51.311906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.211 qpair failed and we were unable to recover it. 00:28:05.211 [2024-07-24 19:28:51.312163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.211 [2024-07-24 19:28:51.312181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.211 qpair failed and we were unable to recover it. 00:28:05.211 [2024-07-24 19:28:51.312344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.211 [2024-07-24 19:28:51.312361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.211 qpair failed and we were unable to recover it. 00:28:05.211 [2024-07-24 19:28:51.312514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.211 [2024-07-24 19:28:51.312531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420 00:28:05.211 qpair failed and we were unable to recover it. 
00:28:05.211 [2024-07-24 19:28:51.312802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.211 [2024-07-24 19:28:51.312820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce41a0 with addr=10.0.0.2, port=4420
00:28:05.211 qpair failed and we were unable to recover it.
00:28:05.215 [2024-07-24 19:28:51.341325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.215 [2024-07-24 19:28:51.341361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420
00:28:05.215 qpair failed and we were unable to recover it.
[log condensed: between 19:28:51.312802 and 19:28:51.359433 (console time 00:28:05.211-00:28:05.217) the connect()/qpair error triple above repeats ~210 times, alternating between tqpair=0x1ce41a0 and tqpair=0x7fd544000b90, always with addr=10.0.0.2, port=4420, errno = 111; every attempt failed and no qpair was recovered.]
00:28:05.217 [2024-07-24 19:28:51.359655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.217 [2024-07-24 19:28:51.359673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.217 qpair failed and we were unable to recover it. 00:28:05.217 [2024-07-24 19:28:51.359849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.217 [2024-07-24 19:28:51.359867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.217 qpair failed and we were unable to recover it. 00:28:05.217 [2024-07-24 19:28:51.360124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.217 [2024-07-24 19:28:51.360144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.217 qpair failed and we were unable to recover it. 00:28:05.217 [2024-07-24 19:28:51.360309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.217 [2024-07-24 19:28:51.360326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.217 qpair failed and we were unable to recover it. 00:28:05.217 [2024-07-24 19:28:51.360543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.217 [2024-07-24 19:28:51.360560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.217 qpair failed and we were unable to recover it. 00:28:05.217 [2024-07-24 19:28:51.360801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.217 [2024-07-24 19:28:51.360819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.217 qpair failed and we were unable to recover it. 00:28:05.217 [2024-07-24 19:28:51.361103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.217 [2024-07-24 19:28:51.361121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.217 qpair failed and we were unable to recover it. 00:28:05.217 [2024-07-24 19:28:51.361424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.217 [2024-07-24 19:28:51.361442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.217 qpair failed and we were unable to recover it. 00:28:05.217 [2024-07-24 19:28:51.361664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.217 [2024-07-24 19:28:51.361682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.217 qpair failed and we were unable to recover it. 00:28:05.217 [2024-07-24 19:28:51.361868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.218 [2024-07-24 19:28:51.361885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.218 qpair failed and we were unable to recover it. 
00:28:05.218 [2024-07-24 19:28:51.362139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.218 [2024-07-24 19:28:51.362157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.218 qpair failed and we were unable to recover it. 00:28:05.218 [2024-07-24 19:28:51.362466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.218 [2024-07-24 19:28:51.362483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.218 qpair failed and we were unable to recover it. 00:28:05.218 [2024-07-24 19:28:51.362654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.218 [2024-07-24 19:28:51.362671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.218 qpair failed and we were unable to recover it. 00:28:05.218 [2024-07-24 19:28:51.362942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.218 [2024-07-24 19:28:51.362960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.218 qpair failed and we were unable to recover it. 00:28:05.218 [2024-07-24 19:28:51.363138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.218 [2024-07-24 19:28:51.363155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.218 qpair failed and we were unable to recover it. 00:28:05.218 [2024-07-24 19:28:51.363323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.218 [2024-07-24 19:28:51.363340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.218 qpair failed and we were unable to recover it. 00:28:05.218 [2024-07-24 19:28:51.363583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.218 [2024-07-24 19:28:51.363601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.218 qpair failed and we were unable to recover it. 00:28:05.218 [2024-07-24 19:28:51.363831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.218 [2024-07-24 19:28:51.363849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.218 qpair failed and we were unable to recover it. 00:28:05.218 [2024-07-24 19:28:51.364160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.218 [2024-07-24 19:28:51.364178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.218 qpair failed and we were unable to recover it. 00:28:05.218 [2024-07-24 19:28:51.364336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.218 [2024-07-24 19:28:51.364353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.218 qpair failed and we were unable to recover it. 
00:28:05.218 [2024-07-24 19:28:51.364609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.218 [2024-07-24 19:28:51.364626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.218 qpair failed and we were unable to recover it. 00:28:05.218 [2024-07-24 19:28:51.364858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.218 [2024-07-24 19:28:51.364876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.218 qpair failed and we were unable to recover it. 00:28:05.218 [2024-07-24 19:28:51.365107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.218 [2024-07-24 19:28:51.365124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.218 qpair failed and we were unable to recover it. 00:28:05.218 [2024-07-24 19:28:51.365376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.218 [2024-07-24 19:28:51.365393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.218 qpair failed and we were unable to recover it. 00:28:05.218 [2024-07-24 19:28:51.365625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.218 [2024-07-24 19:28:51.365642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.218 qpair failed and we were unable to recover it. 00:28:05.218 [2024-07-24 19:28:51.365805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.218 [2024-07-24 19:28:51.365823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.218 qpair failed and we were unable to recover it. 00:28:05.218 [2024-07-24 19:28:51.366039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.218 [2024-07-24 19:28:51.366056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.218 qpair failed and we were unable to recover it. 00:28:05.218 [2024-07-24 19:28:51.366255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.218 [2024-07-24 19:28:51.366273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.218 qpair failed and we were unable to recover it. 00:28:05.218 [2024-07-24 19:28:51.366497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.218 [2024-07-24 19:28:51.366515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.218 qpair failed and we were unable to recover it. 00:28:05.218 [2024-07-24 19:28:51.366803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.218 [2024-07-24 19:28:51.366821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.218 qpair failed and we were unable to recover it. 
00:28:05.218 [2024-07-24 19:28:51.367040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.218 [2024-07-24 19:28:51.367057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.218 qpair failed and we were unable to recover it. 00:28:05.218 [2024-07-24 19:28:51.367342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.218 [2024-07-24 19:28:51.367359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.218 qpair failed and we were unable to recover it. 00:28:05.218 [2024-07-24 19:28:51.367577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.218 [2024-07-24 19:28:51.367594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.218 qpair failed and we were unable to recover it. 00:28:05.218 [2024-07-24 19:28:51.367859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.218 [2024-07-24 19:28:51.367876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.218 qpair failed and we were unable to recover it. 00:28:05.218 [2024-07-24 19:28:51.368052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.218 [2024-07-24 19:28:51.368069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.218 qpair failed and we were unable to recover it. 00:28:05.218 [2024-07-24 19:28:51.368236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.218 [2024-07-24 19:28:51.368254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.218 qpair failed and we were unable to recover it. 00:28:05.218 [2024-07-24 19:28:51.368430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.218 [2024-07-24 19:28:51.368448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.218 qpair failed and we were unable to recover it. 00:28:05.218 [2024-07-24 19:28:51.368608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.218 [2024-07-24 19:28:51.368626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.218 qpair failed and we were unable to recover it. 00:28:05.218 [2024-07-24 19:28:51.368850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.218 [2024-07-24 19:28:51.368867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.218 qpair failed and we were unable to recover it. 00:28:05.218 [2024-07-24 19:28:51.369097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.219 [2024-07-24 19:28:51.369115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.219 qpair failed and we were unable to recover it. 
00:28:05.219 [2024-07-24 19:28:51.369265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.219 [2024-07-24 19:28:51.369283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.219 qpair failed and we were unable to recover it. 00:28:05.219 [2024-07-24 19:28:51.369458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.219 [2024-07-24 19:28:51.369475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.219 qpair failed and we were unable to recover it. 00:28:05.219 [2024-07-24 19:28:51.369638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.219 [2024-07-24 19:28:51.369657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.219 qpair failed and we were unable to recover it. 00:28:05.219 [2024-07-24 19:28:51.369878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.219 [2024-07-24 19:28:51.369895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.219 qpair failed and we were unable to recover it. 00:28:05.219 [2024-07-24 19:28:51.370060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.219 [2024-07-24 19:28:51.370077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.219 qpair failed and we were unable to recover it. 00:28:05.219 [2024-07-24 19:28:51.370226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.219 [2024-07-24 19:28:51.370243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.219 qpair failed and we were unable to recover it. 00:28:05.219 [2024-07-24 19:28:51.370529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.219 [2024-07-24 19:28:51.370547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.219 qpair failed and we were unable to recover it. 00:28:05.219 [2024-07-24 19:28:51.370787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.219 [2024-07-24 19:28:51.370805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.219 qpair failed and we were unable to recover it. 00:28:05.219 [2024-07-24 19:28:51.370972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.219 [2024-07-24 19:28:51.370990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.219 qpair failed and we were unable to recover it. 00:28:05.219 [2024-07-24 19:28:51.371224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.219 [2024-07-24 19:28:51.371242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.219 qpair failed and we were unable to recover it. 
00:28:05.219 [2024-07-24 19:28:51.371421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.219 [2024-07-24 19:28:51.371438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.219 qpair failed and we were unable to recover it. 00:28:05.219 [2024-07-24 19:28:51.371693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.219 [2024-07-24 19:28:51.371711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.219 qpair failed and we were unable to recover it. 00:28:05.219 [2024-07-24 19:28:51.371981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.219 [2024-07-24 19:28:51.371998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.219 qpair failed and we were unable to recover it. 00:28:05.219 [2024-07-24 19:28:51.372157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.219 [2024-07-24 19:28:51.372174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.219 qpair failed and we were unable to recover it. 00:28:05.219 [2024-07-24 19:28:51.372328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.219 [2024-07-24 19:28:51.372345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.219 qpair failed and we were unable to recover it. 00:28:05.219 [2024-07-24 19:28:51.372566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.219 [2024-07-24 19:28:51.372583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.219 qpair failed and we were unable to recover it. 00:28:05.219 [2024-07-24 19:28:51.372826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.219 [2024-07-24 19:28:51.372845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.219 qpair failed and we were unable to recover it. 00:28:05.219 [2024-07-24 19:28:51.372940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.219 [2024-07-24 19:28:51.372958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.219 qpair failed and we were unable to recover it. 00:28:05.219 [2024-07-24 19:28:51.373122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.219 [2024-07-24 19:28:51.373139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.219 qpair failed and we were unable to recover it. 00:28:05.219 [2024-07-24 19:28:51.373308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.219 [2024-07-24 19:28:51.373325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.219 qpair failed and we were unable to recover it. 
00:28:05.219 [2024-07-24 19:28:51.373495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.219 [2024-07-24 19:28:51.373512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.219 qpair failed and we were unable to recover it. 00:28:05.219 [2024-07-24 19:28:51.373864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.219 [2024-07-24 19:28:51.373883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.219 qpair failed and we were unable to recover it. 00:28:05.219 [2024-07-24 19:28:51.374047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.219 [2024-07-24 19:28:51.374064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.219 qpair failed and we were unable to recover it. 00:28:05.219 [2024-07-24 19:28:51.374308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.219 [2024-07-24 19:28:51.374325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.219 qpair failed and we were unable to recover it. 00:28:05.219 [2024-07-24 19:28:51.374608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.219 [2024-07-24 19:28:51.374626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.219 qpair failed and we were unable to recover it. 00:28:05.219 [2024-07-24 19:28:51.374774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.219 [2024-07-24 19:28:51.374792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.219 qpair failed and we were unable to recover it. 00:28:05.219 [2024-07-24 19:28:51.374952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.219 [2024-07-24 19:28:51.374969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.219 qpair failed and we were unable to recover it. 00:28:05.219 [2024-07-24 19:28:51.375120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.219 [2024-07-24 19:28:51.375138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.219 qpair failed and we were unable to recover it. 00:28:05.219 [2024-07-24 19:28:51.375297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.219 [2024-07-24 19:28:51.375315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.219 qpair failed and we were unable to recover it. 00:28:05.219 [2024-07-24 19:28:51.375481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.219 [2024-07-24 19:28:51.375499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.219 qpair failed and we were unable to recover it. 
00:28:05.219 [2024-07-24 19:28:51.375725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.219 [2024-07-24 19:28:51.375743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.219 qpair failed and we were unable to recover it. 00:28:05.219 [2024-07-24 19:28:51.375958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.219 [2024-07-24 19:28:51.375976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.219 qpair failed and we were unable to recover it. 00:28:05.219 [2024-07-24 19:28:51.376147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.219 [2024-07-24 19:28:51.376165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.219 qpair failed and we were unable to recover it. 00:28:05.219 [2024-07-24 19:28:51.376379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.219 [2024-07-24 19:28:51.376396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.219 qpair failed and we were unable to recover it. 00:28:05.220 [2024-07-24 19:28:51.376632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.220 [2024-07-24 19:28:51.376650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.220 qpair failed and we were unable to recover it. 00:28:05.220 [2024-07-24 19:28:51.376816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.220 [2024-07-24 19:28:51.376833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.220 qpair failed and we were unable to recover it. 00:28:05.220 [2024-07-24 19:28:51.377000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.220 [2024-07-24 19:28:51.377017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.220 qpair failed and we were unable to recover it. 00:28:05.220 [2024-07-24 19:28:51.377252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.220 [2024-07-24 19:28:51.377269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.220 qpair failed and we were unable to recover it. 00:28:05.220 [2024-07-24 19:28:51.377579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.220 [2024-07-24 19:28:51.377597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.220 qpair failed and we were unable to recover it. 00:28:05.220 [2024-07-24 19:28:51.377828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.220 [2024-07-24 19:28:51.377846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.220 qpair failed and we were unable to recover it. 
00:28:05.220 [2024-07-24 19:28:51.378007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.220 [2024-07-24 19:28:51.378024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.220 qpair failed and we were unable to recover it. 00:28:05.220 [2024-07-24 19:28:51.378245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.220 [2024-07-24 19:28:51.378263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.220 qpair failed and we were unable to recover it. 00:28:05.220 [2024-07-24 19:28:51.378372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.220 [2024-07-24 19:28:51.378392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.220 qpair failed and we were unable to recover it. 00:28:05.220 [2024-07-24 19:28:51.378623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.220 [2024-07-24 19:28:51.378641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.220 qpair failed and we were unable to recover it. 00:28:05.220 [2024-07-24 19:28:51.378867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.220 [2024-07-24 19:28:51.378885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.220 qpair failed and we were unable to recover it. 00:28:05.220 [2024-07-24 19:28:51.379062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.220 [2024-07-24 19:28:51.379079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.220 qpair failed and we were unable to recover it. 00:28:05.220 [2024-07-24 19:28:51.379321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.220 [2024-07-24 19:28:51.379339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.220 qpair failed and we were unable to recover it. 00:28:05.220 [2024-07-24 19:28:51.379569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.220 [2024-07-24 19:28:51.379587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.220 qpair failed and we were unable to recover it. 00:28:05.220 [2024-07-24 19:28:51.379746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.220 [2024-07-24 19:28:51.379764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.220 qpair failed and we were unable to recover it. 00:28:05.220 [2024-07-24 19:28:51.379988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.220 [2024-07-24 19:28:51.380006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.220 qpair failed and we were unable to recover it. 
00:28:05.220 [2024-07-24 19:28:51.380280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.220 [2024-07-24 19:28:51.380298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.220 qpair failed and we were unable to recover it. 00:28:05.220 [2024-07-24 19:28:51.380537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.220 [2024-07-24 19:28:51.380555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.220 qpair failed and we were unable to recover it. 00:28:05.220 [2024-07-24 19:28:51.380647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.220 [2024-07-24 19:28:51.380665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.220 qpair failed and we were unable to recover it. 00:28:05.220 [2024-07-24 19:28:51.380830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.220 [2024-07-24 19:28:51.380848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.220 qpair failed and we were unable to recover it. 00:28:05.220 [2024-07-24 19:28:51.381068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.220 [2024-07-24 19:28:51.381085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.220 qpair failed and we were unable to recover it. 00:28:05.220 [2024-07-24 19:28:51.381414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.220 [2024-07-24 19:28:51.381432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.220 qpair failed and we were unable to recover it. 00:28:05.220 [2024-07-24 19:28:51.381679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.220 [2024-07-24 19:28:51.381697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.220 qpair failed and we were unable to recover it. 00:28:05.220 [2024-07-24 19:28:51.381922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.220 [2024-07-24 19:28:51.381940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.220 qpair failed and we were unable to recover it. 00:28:05.220 [2024-07-24 19:28:51.382093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.220 [2024-07-24 19:28:51.382110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.220 qpair failed and we were unable to recover it. 00:28:05.220 [2024-07-24 19:28:51.382273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.220 [2024-07-24 19:28:51.382290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.220 qpair failed and we were unable to recover it. 
00:28:05.220 [2024-07-24 19:28:51.382523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.220 [2024-07-24 19:28:51.382540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.220 qpair failed and we were unable to recover it. 00:28:05.220 [2024-07-24 19:28:51.382794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.220 [2024-07-24 19:28:51.382812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.220 qpair failed and we were unable to recover it. 00:28:05.220 [2024-07-24 19:28:51.383043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.220 [2024-07-24 19:28:51.383061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.220 qpair failed and we were unable to recover it. 00:28:05.220 [2024-07-24 19:28:51.383281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.220 [2024-07-24 19:28:51.383299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.220 qpair failed and we were unable to recover it. 00:28:05.220 [2024-07-24 19:28:51.383536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.220 [2024-07-24 19:28:51.383553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.220 qpair failed and we were unable to recover it. 00:28:05.220 [2024-07-24 19:28:51.383797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.220 [2024-07-24 19:28:51.383814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.220 qpair failed and we were unable to recover it. 00:28:05.220 [2024-07-24 19:28:51.383980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.220 [2024-07-24 19:28:51.383997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.220 qpair failed and we were unable to recover it. 00:28:05.220 [2024-07-24 19:28:51.384309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.220 [2024-07-24 19:28:51.384327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.220 qpair failed and we were unable to recover it. 00:28:05.220 [2024-07-24 19:28:51.384558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.220 [2024-07-24 19:28:51.384575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.220 qpair failed and we were unable to recover it. 00:28:05.220 [2024-07-24 19:28:51.384801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.220 [2024-07-24 19:28:51.384819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.220 qpair failed and we were unable to recover it. 
00:28:05.221 [2024-07-24 19:28:51.385121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.221 [2024-07-24 19:28:51.385138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.221 qpair failed and we were unable to recover it. 00:28:05.221 [2024-07-24 19:28:51.385361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.221 [2024-07-24 19:28:51.385378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.221 qpair failed and we were unable to recover it. 00:28:05.221 [2024-07-24 19:28:51.385688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.221 [2024-07-24 19:28:51.385706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.221 qpair failed and we were unable to recover it. 00:28:05.221 [2024-07-24 19:28:51.385885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.221 [2024-07-24 19:28:51.385902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.221 qpair failed and we were unable to recover it. 00:28:05.221 [2024-07-24 19:28:51.386086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.221 [2024-07-24 19:28:51.386104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.221 qpair failed and we were unable to recover it. 00:28:05.221 [2024-07-24 19:28:51.386360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.221 [2024-07-24 19:28:51.386378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.221 qpair failed and we were unable to recover it. 00:28:05.221 [2024-07-24 19:28:51.386629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.221 [2024-07-24 19:28:51.386646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.221 qpair failed and we were unable to recover it. 00:28:05.221 [2024-07-24 19:28:51.386826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.221 [2024-07-24 19:28:51.386844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.221 qpair failed and we were unable to recover it. 00:28:05.221 [2024-07-24 19:28:51.387000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.221 [2024-07-24 19:28:51.387017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.221 qpair failed and we were unable to recover it. 00:28:05.221 [2024-07-24 19:28:51.387195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.221 [2024-07-24 19:28:51.387213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.221 qpair failed and we were unable to recover it. 
00:28:05.221 [2024-07-24 19:28:51.387378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.221 [2024-07-24 19:28:51.387396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.221 qpair failed and we were unable to recover it. 00:28:05.221 [2024-07-24 19:28:51.387625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.221 [2024-07-24 19:28:51.387642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.221 qpair failed and we were unable to recover it. 00:28:05.221 [2024-07-24 19:28:51.387865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.221 [2024-07-24 19:28:51.387885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.221 qpair failed and we were unable to recover it. 00:28:05.221 [2024-07-24 19:28:51.388109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.221 [2024-07-24 19:28:51.388126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.221 qpair failed and we were unable to recover it. 00:28:05.221 [2024-07-24 19:28:51.388356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.221 [2024-07-24 19:28:51.388373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.221 qpair failed and we were unable to recover it. 00:28:05.221 [2024-07-24 19:28:51.388614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.221 [2024-07-24 19:28:51.388633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.221 qpair failed and we were unable to recover it. 00:28:05.221 [2024-07-24 19:28:51.388867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.221 [2024-07-24 19:28:51.388885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.221 qpair failed and we were unable to recover it. 00:28:05.221 [2024-07-24 19:28:51.389109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.221 [2024-07-24 19:28:51.389127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.221 qpair failed and we were unable to recover it. 00:28:05.221 [2024-07-24 19:28:51.389375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.221 [2024-07-24 19:28:51.389393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.221 qpair failed and we were unable to recover it. 00:28:05.221 [2024-07-24 19:28:51.389612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.221 [2024-07-24 19:28:51.389630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420 00:28:05.221 qpair failed and we were unable to recover it. 
00:28:05.221 [2024-07-24 19:28:51.389861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.221 [2024-07-24 19:28:51.389879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd544000b90 with addr=10.0.0.2, port=4420
00:28:05.221 qpair failed and we were unable to recover it.
[... the three-line pattern above (connect() failed with errno = 111, the nvme_tcp_qpair_connect_sock error, and "qpair failed and we were unable to recover it.") repeats for every connect retry from 19:28:51.389 through 19:28:51.435, with the failing tqpair switching between 0x7fd544000b90 and 0x7fd54c000b90; every attempt targets addr=10.0.0.2, port=4420 ...]
00:28:05.499 [2024-07-24 19:28:51.435652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.499 [2024-07-24 19:28:51.435666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:05.499 qpair failed and we were unable to recover it.
00:28:05.499 [2024-07-24 19:28:51.435827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-07-24 19:28:51.435840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.499 qpair failed and we were unable to recover it. 00:28:05.499 [2024-07-24 19:28:51.436099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.499 [2024-07-24 19:28:51.436113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-07-24 19:28:51.436350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-07-24 19:28:51.436363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-07-24 19:28:51.436538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-07-24 19:28:51.436551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-07-24 19:28:51.436751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-07-24 19:28:51.436765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-07-24 19:28:51.436918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-07-24 19:28:51.436932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-07-24 19:28:51.437075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-07-24 19:28:51.437089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-07-24 19:28:51.437173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-07-24 19:28:51.437186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-07-24 19:28:51.437419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-07-24 19:28:51.437432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-07-24 19:28:51.437669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-07-24 19:28:51.437683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 
00:28:05.500 [2024-07-24 19:28:51.437850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-07-24 19:28:51.437864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-07-24 19:28:51.438140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-07-24 19:28:51.438153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-07-24 19:28:51.438315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-07-24 19:28:51.438329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-07-24 19:28:51.438480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-07-24 19:28:51.438493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-07-24 19:28:51.438712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-07-24 19:28:51.438731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-07-24 19:28:51.439008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-07-24 19:28:51.439021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-07-24 19:28:51.439236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-07-24 19:28:51.439250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-07-24 19:28:51.439418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-07-24 19:28:51.439431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-07-24 19:28:51.439579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-07-24 19:28:51.439594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-07-24 19:28:51.439700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-07-24 19:28:51.439713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 
00:28:05.500 [2024-07-24 19:28:51.439924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-07-24 19:28:51.439937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-07-24 19:28:51.440223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-07-24 19:28:51.440236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-07-24 19:28:51.440336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-07-24 19:28:51.440352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-07-24 19:28:51.440634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-07-24 19:28:51.440647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-07-24 19:28:51.440924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-07-24 19:28:51.440938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-07-24 19:28:51.441093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-07-24 19:28:51.441106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-07-24 19:28:51.441245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-07-24 19:28:51.441259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-07-24 19:28:51.441479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-07-24 19:28:51.441492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-07-24 19:28:51.441723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-07-24 19:28:51.441737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-07-24 19:28:51.441898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-07-24 19:28:51.441912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 
00:28:05.500 [2024-07-24 19:28:51.442163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-07-24 19:28:51.442176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-07-24 19:28:51.442402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-07-24 19:28:51.442416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-07-24 19:28:51.442627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-07-24 19:28:51.442641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-07-24 19:28:51.442916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-07-24 19:28:51.442929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-07-24 19:28:51.443205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-07-24 19:28:51.443219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-07-24 19:28:51.443398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.500 [2024-07-24 19:28:51.443411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.500 qpair failed and we were unable to recover it. 00:28:05.500 [2024-07-24 19:28:51.443631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-07-24 19:28:51.443645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-07-24 19:28:51.443791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-07-24 19:28:51.443805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-07-24 19:28:51.444054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-07-24 19:28:51.444067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-07-24 19:28:51.444235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-07-24 19:28:51.444249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 
00:28:05.501 [2024-07-24 19:28:51.444421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-07-24 19:28:51.444435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-07-24 19:28:51.444586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-07-24 19:28:51.444600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-07-24 19:28:51.444808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-07-24 19:28:51.444822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-07-24 19:28:51.445032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-07-24 19:28:51.445045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-07-24 19:28:51.445320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-07-24 19:28:51.445334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-07-24 19:28:51.445485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-07-24 19:28:51.445499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-07-24 19:28:51.445657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-07-24 19:28:51.445671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-07-24 19:28:51.445884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-07-24 19:28:51.445897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-07-24 19:28:51.446217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-07-24 19:28:51.446230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-07-24 19:28:51.446391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-07-24 19:28:51.446405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 
00:28:05.501 [2024-07-24 19:28:51.446614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-07-24 19:28:51.446627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-07-24 19:28:51.446846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-07-24 19:28:51.446860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-07-24 19:28:51.447017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-07-24 19:28:51.447031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-07-24 19:28:51.447250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-07-24 19:28:51.447263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-07-24 19:28:51.447420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-07-24 19:28:51.447434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-07-24 19:28:51.447648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-07-24 19:28:51.447661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-07-24 19:28:51.447886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-07-24 19:28:51.447900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-07-24 19:28:51.448189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-07-24 19:28:51.448202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-07-24 19:28:51.448356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-07-24 19:28:51.448370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-07-24 19:28:51.448647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-07-24 19:28:51.448660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 
00:28:05.501 [2024-07-24 19:28:51.448946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-07-24 19:28:51.448960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-07-24 19:28:51.449187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-07-24 19:28:51.449201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-07-24 19:28:51.449437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-07-24 19:28:51.449451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-07-24 19:28:51.449543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-07-24 19:28:51.449556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-07-24 19:28:51.449719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-07-24 19:28:51.449733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-07-24 19:28:51.449939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-07-24 19:28:51.449952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-07-24 19:28:51.450126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-07-24 19:28:51.450139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-07-24 19:28:51.450310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-07-24 19:28:51.450323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-07-24 19:28:51.450625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-07-24 19:28:51.450638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.501 [2024-07-24 19:28:51.450798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-07-24 19:28:51.450813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 
00:28:05.501 [2024-07-24 19:28:51.450965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.501 [2024-07-24 19:28:51.450978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.501 qpair failed and we were unable to recover it. 00:28:05.502 [2024-07-24 19:28:51.451200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.502 [2024-07-24 19:28:51.451213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.502 qpair failed and we were unable to recover it. 00:28:05.502 [2024-07-24 19:28:51.451434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.502 [2024-07-24 19:28:51.451447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.502 qpair failed and we were unable to recover it. 00:28:05.502 [2024-07-24 19:28:51.451614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.502 [2024-07-24 19:28:51.451627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.502 qpair failed and we were unable to recover it. 00:28:05.502 [2024-07-24 19:28:51.451875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.502 [2024-07-24 19:28:51.451888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.502 qpair failed and we were unable to recover it. 00:28:05.502 [2024-07-24 19:28:51.452097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.502 [2024-07-24 19:28:51.452110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.502 qpair failed and we were unable to recover it. 00:28:05.502 [2024-07-24 19:28:51.452287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.502 [2024-07-24 19:28:51.452300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.502 qpair failed and we were unable to recover it. 00:28:05.502 [2024-07-24 19:28:51.452550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.502 [2024-07-24 19:28:51.452564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.502 qpair failed and we were unable to recover it. 00:28:05.502 [2024-07-24 19:28:51.452722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.502 [2024-07-24 19:28:51.452736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.502 qpair failed and we were unable to recover it. 00:28:05.502 [2024-07-24 19:28:51.452961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.502 [2024-07-24 19:28:51.452974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.502 qpair failed and we were unable to recover it. 
00:28:05.502 [2024-07-24 19:28:51.453249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.502 [2024-07-24 19:28:51.453263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.502 qpair failed and we were unable to recover it. 00:28:05.502 [2024-07-24 19:28:51.453482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.502 [2024-07-24 19:28:51.453495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.502 qpair failed and we were unable to recover it. 00:28:05.502 [2024-07-24 19:28:51.453792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.502 [2024-07-24 19:28:51.453806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.502 qpair failed and we were unable to recover it. 00:28:05.502 [2024-07-24 19:28:51.453962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.502 [2024-07-24 19:28:51.453976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.502 qpair failed and we were unable to recover it. 00:28:05.502 [2024-07-24 19:28:51.454195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.502 [2024-07-24 19:28:51.454209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.502 qpair failed and we were unable to recover it. 00:28:05.502 [2024-07-24 19:28:51.454423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.502 [2024-07-24 19:28:51.454437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.502 qpair failed and we were unable to recover it. 00:28:05.502 [2024-07-24 19:28:51.454677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.502 [2024-07-24 19:28:51.454690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.502 qpair failed and we were unable to recover it. 00:28:05.502 [2024-07-24 19:28:51.454849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.502 [2024-07-24 19:28:51.454862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.502 qpair failed and we were unable to recover it. 00:28:05.502 [2024-07-24 19:28:51.455145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.502 [2024-07-24 19:28:51.455158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.502 qpair failed and we were unable to recover it. 00:28:05.502 [2024-07-24 19:28:51.455326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.502 [2024-07-24 19:28:51.455343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.502 qpair failed and we were unable to recover it. 
00:28:05.502 [2024-07-24 19:28:51.455447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.502 [2024-07-24 19:28:51.455460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.502 qpair failed and we were unable to recover it. 00:28:05.502 [2024-07-24 19:28:51.455668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.502 [2024-07-24 19:28:51.455681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.502 qpair failed and we were unable to recover it. 00:28:05.502 [2024-07-24 19:28:51.455912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.502 [2024-07-24 19:28:51.455925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.502 qpair failed and we were unable to recover it. 00:28:05.502 [2024-07-24 19:28:51.456072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.502 [2024-07-24 19:28:51.456085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.502 qpair failed and we were unable to recover it. 00:28:05.502 [2024-07-24 19:28:51.456176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.502 [2024-07-24 19:28:51.456189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.502 qpair failed and we were unable to recover it. 00:28:05.502 [2024-07-24 19:28:51.456411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.502 [2024-07-24 19:28:51.456424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.502 qpair failed and we were unable to recover it. 00:28:05.502 [2024-07-24 19:28:51.456578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.502 [2024-07-24 19:28:51.456591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.502 qpair failed and we were unable to recover it. 00:28:05.502 [2024-07-24 19:28:51.456752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.502 [2024-07-24 19:28:51.456766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.502 qpair failed and we were unable to recover it. 00:28:05.502 [2024-07-24 19:28:51.456921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.502 [2024-07-24 19:28:51.456934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.502 qpair failed and we were unable to recover it. 00:28:05.502 [2024-07-24 19:28:51.457117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.502 [2024-07-24 19:28:51.457130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.502 qpair failed and we were unable to recover it. 
00:28:05.502 [2024-07-24 19:28:51.457271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.502 [2024-07-24 19:28:51.457284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.502 qpair failed and we were unable to recover it. 00:28:05.502 [2024-07-24 19:28:51.457448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.502 [2024-07-24 19:28:51.457461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.502 qpair failed and we were unable to recover it. 00:28:05.502 [2024-07-24 19:28:51.457674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.502 [2024-07-24 19:28:51.457687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.502 qpair failed and we were unable to recover it. 00:28:05.502 [2024-07-24 19:28:51.457872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.502 [2024-07-24 19:28:51.457886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.502 qpair failed and we were unable to recover it. 00:28:05.502 [2024-07-24 19:28:51.458065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.502 [2024-07-24 19:28:51.458079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.502 qpair failed and we were unable to recover it. 00:28:05.502 [2024-07-24 19:28:51.458322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.502 [2024-07-24 19:28:51.458335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.502 qpair failed and we were unable to recover it. 00:28:05.503 [2024-07-24 19:28:51.458544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.503 [2024-07-24 19:28:51.458558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.503 qpair failed and we were unable to recover it. 00:28:05.503 [2024-07-24 19:28:51.458711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.503 [2024-07-24 19:28:51.458730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.503 qpair failed and we were unable to recover it. 00:28:05.503 [2024-07-24 19:28:51.459006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.503 [2024-07-24 19:28:51.459019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.503 qpair failed and we were unable to recover it. 00:28:05.503 [2024-07-24 19:28:51.459342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.503 [2024-07-24 19:28:51.459355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.503 qpair failed and we were unable to recover it. 
00:28:05.503 [2024-07-24 19:28:51.459563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.503 [2024-07-24 19:28:51.459577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.503 qpair failed and we were unable to recover it. 00:28:05.503 [2024-07-24 19:28:51.459796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.503 [2024-07-24 19:28:51.459809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.503 qpair failed and we were unable to recover it. 00:28:05.503 [2024-07-24 19:28:51.460092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.503 [2024-07-24 19:28:51.460106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.503 qpair failed and we were unable to recover it. 00:28:05.503 [2024-07-24 19:28:51.460269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.503 [2024-07-24 19:28:51.460283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.503 qpair failed and we were unable to recover it. 00:28:05.503 [2024-07-24 19:28:51.460427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.503 [2024-07-24 19:28:51.460441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.503 qpair failed and we were unable to recover it. 00:28:05.503 [2024-07-24 19:28:51.460654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.503 [2024-07-24 19:28:51.460667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.503 qpair failed and we were unable to recover it. 00:28:05.503 [2024-07-24 19:28:51.460832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.503 [2024-07-24 19:28:51.460846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.503 qpair failed and we were unable to recover it. 00:28:05.503 [2024-07-24 19:28:51.461007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.503 [2024-07-24 19:28:51.461021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.503 qpair failed and we were unable to recover it. 00:28:05.503 [2024-07-24 19:28:51.461327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.503 [2024-07-24 19:28:51.461341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.503 qpair failed and we were unable to recover it. 00:28:05.503 [2024-07-24 19:28:51.461564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.503 [2024-07-24 19:28:51.461577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.503 qpair failed and we were unable to recover it. 
00:28:05.503 [2024-07-24 19:28:51.461786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.503 [2024-07-24 19:28:51.461799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.503 qpair failed and we were unable to recover it. 00:28:05.503 [2024-07-24 19:28:51.461955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.503 [2024-07-24 19:28:51.461969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.503 qpair failed and we were unable to recover it. 00:28:05.503 [2024-07-24 19:28:51.462204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.503 [2024-07-24 19:28:51.462217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.503 qpair failed and we were unable to recover it. 00:28:05.503 [2024-07-24 19:28:51.462519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.503 [2024-07-24 19:28:51.462532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.503 qpair failed and we were unable to recover it. 00:28:05.503 [2024-07-24 19:28:51.462682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.503 [2024-07-24 19:28:51.462695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.503 qpair failed and we were unable to recover it. 00:28:05.503 [2024-07-24 19:28:51.462928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.503 [2024-07-24 19:28:51.462942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.503 qpair failed and we were unable to recover it. 00:28:05.503 [2024-07-24 19:28:51.463097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.503 [2024-07-24 19:28:51.463110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.503 qpair failed and we were unable to recover it. 00:28:05.503 [2024-07-24 19:28:51.463335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.503 [2024-07-24 19:28:51.463349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.503 qpair failed and we were unable to recover it. 00:28:05.503 [2024-07-24 19:28:51.463558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.503 [2024-07-24 19:28:51.463571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.503 qpair failed and we were unable to recover it. 00:28:05.503 [2024-07-24 19:28:51.463857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.503 [2024-07-24 19:28:51.463872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.503 qpair failed and we were unable to recover it. 
00:28:05.503 [2024-07-24 19:28:51.464129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.503 [2024-07-24 19:28:51.464143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.503 qpair failed and we were unable to recover it. 00:28:05.503 [2024-07-24 19:28:51.464358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.503 [2024-07-24 19:28:51.464371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.503 qpair failed and we were unable to recover it. 00:28:05.503 [2024-07-24 19:28:51.464526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.503 [2024-07-24 19:28:51.464539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.503 qpair failed and we were unable to recover it. 00:28:05.503 [2024-07-24 19:28:51.464762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.503 [2024-07-24 19:28:51.464776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.503 qpair failed and we were unable to recover it. 00:28:05.503 [2024-07-24 19:28:51.464994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.503 [2024-07-24 19:28:51.465008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.503 qpair failed and we were unable to recover it. 00:28:05.503 [2024-07-24 19:28:51.465255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.503 [2024-07-24 19:28:51.465268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.503 qpair failed and we were unable to recover it. 00:28:05.503 [2024-07-24 19:28:51.465488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.503 [2024-07-24 19:28:51.465501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.503 qpair failed and we were unable to recover it. 00:28:05.503 [2024-07-24 19:28:51.465653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.503 [2024-07-24 19:28:51.465666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.503 qpair failed and we were unable to recover it. 00:28:05.503 [2024-07-24 19:28:51.465886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.503 [2024-07-24 19:28:51.465899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.503 qpair failed and we were unable to recover it. 00:28:05.503 [2024-07-24 19:28:51.466216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.503 [2024-07-24 19:28:51.466229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.503 qpair failed and we were unable to recover it. 
00:28:05.503 [2024-07-24 19:28:51.466438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.503 [2024-07-24 19:28:51.466451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.503 qpair failed and we were unable to recover it. 00:28:05.503 [2024-07-24 19:28:51.466694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-07-24 19:28:51.466708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 00:28:05.504 [2024-07-24 19:28:51.466893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-07-24 19:28:51.466906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 00:28:05.504 [2024-07-24 19:28:51.467153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-07-24 19:28:51.467166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 00:28:05.504 [2024-07-24 19:28:51.467339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-07-24 19:28:51.467352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 00:28:05.504 [2024-07-24 19:28:51.467583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-07-24 19:28:51.467596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 00:28:05.504 [2024-07-24 19:28:51.467809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-07-24 19:28:51.467822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 00:28:05.504 [2024-07-24 19:28:51.468050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-07-24 19:28:51.468063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 00:28:05.504 [2024-07-24 19:28:51.468216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-07-24 19:28:51.468229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 00:28:05.504 [2024-07-24 19:28:51.468507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-07-24 19:28:51.468520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 
00:28:05.504 [2024-07-24 19:28:51.468744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-07-24 19:28:51.468757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 00:28:05.504 [2024-07-24 19:28:51.468917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-07-24 19:28:51.468930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 00:28:05.504 [2024-07-24 19:28:51.469207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-07-24 19:28:51.469220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 00:28:05.504 [2024-07-24 19:28:51.469430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-07-24 19:28:51.469444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 00:28:05.504 [2024-07-24 19:28:51.469656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-07-24 19:28:51.469670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 00:28:05.504 [2024-07-24 19:28:51.469828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-07-24 19:28:51.469841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 00:28:05.504 [2024-07-24 19:28:51.470027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-07-24 19:28:51.470040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 00:28:05.504 [2024-07-24 19:28:51.470200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-07-24 19:28:51.470213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 00:28:05.504 [2024-07-24 19:28:51.470372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-07-24 19:28:51.470385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 
00:28:05.504 19:28:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:05.504 [2024-07-24 19:28:51.470594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.504 [2024-07-24 19:28:51.470607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:05.504 qpair failed and we were unable to recover it.
00:28:05.504 19:28:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:28:05.504 [2024-07-24 19:28:51.470884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.504 [2024-07-24 19:28:51.470897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:05.504 qpair failed and we were unable to recover it.
00:28:05.504 [2024-07-24 19:28:51.471130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.504 [2024-07-24 19:28:51.471144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:05.504 qpair failed and we were unable to recover it.
00:28:05.504 19:28:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:28:05.504 [2024-07-24 19:28:51.471303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.504 [2024-07-24 19:28:51.471317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:05.504 qpair failed and we were unable to recover it.
00:28:05.504 19:28:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:28:05.504 [2024-07-24 19:28:51.471477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.504 [2024-07-24 19:28:51.471491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:05.504 qpair failed and we were unable to recover it.
00:28:05.504 [2024-07-24 19:28:51.471636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.504 [2024-07-24 19:28:51.471649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:05.504 qpair failed and we were unable to recover it.
00:28:05.504 19:28:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:05.504 [2024-07-24 19:28:51.471893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.504 [2024-07-24 19:28:51.471906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:05.504 qpair failed and we were unable to recover it.
00:28:05.504 [2024-07-24 19:28:51.472133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.504 [2024-07-24 19:28:51.472146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:05.504 qpair failed and we were unable to recover it.
00:28:05.504 [2024-07-24 19:28:51.472304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-07-24 19:28:51.472318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 00:28:05.504 [2024-07-24 19:28:51.472536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.504 [2024-07-24 19:28:51.472549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.504 qpair failed and we were unable to recover it. 00:28:05.505 [2024-07-24 19:28:51.472770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-07-24 19:28:51.472784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-07-24 19:28:51.473010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-07-24 19:28:51.473023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-07-24 19:28:51.473171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-07-24 19:28:51.473186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-07-24 19:28:51.473400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-07-24 19:28:51.473414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-07-24 19:28:51.473572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-07-24 19:28:51.473586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-07-24 19:28:51.473789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-07-24 19:28:51.473803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-07-24 19:28:51.473959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-07-24 19:28:51.473971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-07-24 19:28:51.474133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-07-24 19:28:51.474147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 
00:28:05.505 [2024-07-24 19:28:51.474299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-07-24 19:28:51.474313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-07-24 19:28:51.474534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-07-24 19:28:51.474547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-07-24 19:28:51.474689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-07-24 19:28:51.474703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-07-24 19:28:51.474932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-07-24 19:28:51.474947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-07-24 19:28:51.475259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-07-24 19:28:51.475273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-07-24 19:28:51.475589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-07-24 19:28:51.475602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-07-24 19:28:51.475774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-07-24 19:28:51.475788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-07-24 19:28:51.475950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-07-24 19:28:51.475963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-07-24 19:28:51.476132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-07-24 19:28:51.476146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-07-24 19:28:51.476299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-07-24 19:28:51.476314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 
00:28:05.505 [2024-07-24 19:28:51.476410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-07-24 19:28:51.476424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-07-24 19:28:51.476639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-07-24 19:28:51.476654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-07-24 19:28:51.476802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-07-24 19:28:51.476816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-07-24 19:28:51.477038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-07-24 19:28:51.477052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-07-24 19:28:51.477331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-07-24 19:28:51.477344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-07-24 19:28:51.477572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-07-24 19:28:51.477586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-07-24 19:28:51.477800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-07-24 19:28:51.477813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-07-24 19:28:51.477968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-07-24 19:28:51.477981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-07-24 19:28:51.478143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-07-24 19:28:51.478156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-07-24 19:28:51.478296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-07-24 19:28:51.478309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 
00:28:05.505 [2024-07-24 19:28:51.478480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-07-24 19:28:51.478494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-07-24 19:28:51.478669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-07-24 19:28:51.478683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-07-24 19:28:51.478854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-07-24 19:28:51.478867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-07-24 19:28:51.479008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-07-24 19:28:51.479022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-07-24 19:28:51.479164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-07-24 19:28:51.479178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-07-24 19:28:51.479351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-07-24 19:28:51.479364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.505 [2024-07-24 19:28:51.479517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.505 [2024-07-24 19:28:51.479531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.505 qpair failed and we were unable to recover it. 00:28:05.506 [2024-07-24 19:28:51.479810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-07-24 19:28:51.479824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-07-24 19:28:51.480009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-07-24 19:28:51.480022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-07-24 19:28:51.480169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-07-24 19:28:51.480182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 
00:28:05.506 [2024-07-24 19:28:51.480336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-07-24 19:28:51.480350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-07-24 19:28:51.480600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-07-24 19:28:51.480613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-07-24 19:28:51.480842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-07-24 19:28:51.480857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-07-24 19:28:51.481002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-07-24 19:28:51.481015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-07-24 19:28:51.481192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-07-24 19:28:51.481207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-07-24 19:28:51.481351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-07-24 19:28:51.481364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-07-24 19:28:51.481522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-07-24 19:28:51.481536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-07-24 19:28:51.481685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-07-24 19:28:51.481698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-07-24 19:28:51.481920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-07-24 19:28:51.481933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-07-24 19:28:51.482115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-07-24 19:28:51.482128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 
00:28:05.506 [2024-07-24 19:28:51.482407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-07-24 19:28:51.482420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-07-24 19:28:51.482561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-07-24 19:28:51.482575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-07-24 19:28:51.482761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-07-24 19:28:51.482774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-07-24 19:28:51.482931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-07-24 19:28:51.482944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-07-24 19:28:51.483100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-07-24 19:28:51.483114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-07-24 19:28:51.483363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-07-24 19:28:51.483376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-07-24 19:28:51.483544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-07-24 19:28:51.483558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-07-24 19:28:51.483771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-07-24 19:28:51.483785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-07-24 19:28:51.483944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-07-24 19:28:51.483957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-07-24 19:28:51.484104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-07-24 19:28:51.484118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 
00:28:05.506 [2024-07-24 19:28:51.484324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-07-24 19:28:51.484338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-07-24 19:28:51.484498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-07-24 19:28:51.484511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-07-24 19:28:51.484744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-07-24 19:28:51.484757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-07-24 19:28:51.484843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-07-24 19:28:51.484857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-07-24 19:28:51.485069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-07-24 19:28:51.485083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-07-24 19:28:51.485238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-07-24 19:28:51.485252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-07-24 19:28:51.485425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-07-24 19:28:51.485440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-07-24 19:28:51.485593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-07-24 19:28:51.485608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-07-24 19:28:51.485780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-07-24 19:28:51.485794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-07-24 19:28:51.485974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-07-24 19:28:51.485989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 
00:28:05.506 [2024-07-24 19:28:51.486137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-07-24 19:28:51.486152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.506 [2024-07-24 19:28:51.486364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.506 [2024-07-24 19:28:51.486378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.506 qpair failed and we were unable to recover it. 00:28:05.507 [2024-07-24 19:28:51.486535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.507 [2024-07-24 19:28:51.486549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.507 qpair failed and we were unable to recover it. 00:28:05.507 [2024-07-24 19:28:51.486824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.507 [2024-07-24 19:28:51.486838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.507 qpair failed and we were unable to recover it. 00:28:05.507 [2024-07-24 19:28:51.487033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.507 [2024-07-24 19:28:51.487046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.507 qpair failed and we were unable to recover it. 00:28:05.507 [2024-07-24 19:28:51.487257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.507 [2024-07-24 19:28:51.487271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.507 qpair failed and we were unable to recover it. 00:28:05.507 [2024-07-24 19:28:51.487417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.507 [2024-07-24 19:28:51.487430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.507 qpair failed and we were unable to recover it. 00:28:05.507 [2024-07-24 19:28:51.487573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.507 [2024-07-24 19:28:51.487586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.507 qpair failed and we were unable to recover it. 00:28:05.507 [2024-07-24 19:28:51.487748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.507 [2024-07-24 19:28:51.487762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.507 qpair failed and we were unable to recover it. 00:28:05.507 [2024-07-24 19:28:51.487936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.507 [2024-07-24 19:28:51.487949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.507 qpair failed and we were unable to recover it. 
00:28:05.507 [2024-07-24 19:28:51.488156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.507 [2024-07-24 19:28:51.488172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.507 qpair failed and we were unable to recover it. 00:28:05.507 [2024-07-24 19:28:51.488256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.507 [2024-07-24 19:28:51.488269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.507 qpair failed and we were unable to recover it. 00:28:05.507 [2024-07-24 19:28:51.488412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.507 [2024-07-24 19:28:51.488425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.507 qpair failed and we were unable to recover it. 00:28:05.507 [2024-07-24 19:28:51.488576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.507 [2024-07-24 19:28:51.488590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.507 qpair failed and we were unable to recover it. 00:28:05.507 [2024-07-24 19:28:51.488818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.507 [2024-07-24 19:28:51.488832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.507 qpair failed and we were unable to recover it. 00:28:05.507 [2024-07-24 19:28:51.488988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.507 [2024-07-24 19:28:51.489002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.507 qpair failed and we were unable to recover it. 00:28:05.507 [2024-07-24 19:28:51.489214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.507 [2024-07-24 19:28:51.489227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.507 qpair failed and we were unable to recover it. 00:28:05.507 [2024-07-24 19:28:51.489374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.507 [2024-07-24 19:28:51.489388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.507 qpair failed and we were unable to recover it. 00:28:05.507 [2024-07-24 19:28:51.489594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.507 [2024-07-24 19:28:51.489608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.507 qpair failed and we were unable to recover it. 00:28:05.507 [2024-07-24 19:28:51.489884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.507 [2024-07-24 19:28:51.489897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.507 qpair failed and we were unable to recover it. 
00:28:05.507 [2024-07-24 19:28:51.490034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.507 [2024-07-24 19:28:51.490047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.507 qpair failed and we were unable to recover it. 00:28:05.507 [2024-07-24 19:28:51.490190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.507 [2024-07-24 19:28:51.490204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.507 qpair failed and we were unable to recover it. 00:28:05.507 [2024-07-24 19:28:51.490352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.507 [2024-07-24 19:28:51.490366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.507 qpair failed and we were unable to recover it. 00:28:05.507 [2024-07-24 19:28:51.490507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.507 [2024-07-24 19:28:51.490520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.507 qpair failed and we were unable to recover it. 00:28:05.507 [2024-07-24 19:28:51.490685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.507 [2024-07-24 19:28:51.490700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.507 qpair failed and we were unable to recover it. 00:28:05.507 [2024-07-24 19:28:51.490830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.507 [2024-07-24 19:28:51.490843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.507 qpair failed and we were unable to recover it. 00:28:05.507 [2024-07-24 19:28:51.490988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.507 [2024-07-24 19:28:51.491001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.507 qpair failed and we were unable to recover it. 00:28:05.507 [2024-07-24 19:28:51.491185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.507 [2024-07-24 19:28:51.491198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.507 qpair failed and we were unable to recover it. 00:28:05.507 [2024-07-24 19:28:51.491358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.507 [2024-07-24 19:28:51.491373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.507 qpair failed and we were unable to recover it. 00:28:05.507 [2024-07-24 19:28:51.491518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.507 [2024-07-24 19:28:51.491531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.507 qpair failed and we were unable to recover it. 
00:28:05.507 [2024-07-24 19:28:51.491750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.507 [2024-07-24 19:28:51.491764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.507 qpair failed and we were unable to recover it. 00:28:05.507 [2024-07-24 19:28:51.491982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.507 [2024-07-24 19:28:51.491995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.507 qpair failed and we were unable to recover it. 00:28:05.507 [2024-07-24 19:28:51.492203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.507 [2024-07-24 19:28:51.492217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.507 qpair failed and we were unable to recover it. 00:28:05.507 [2024-07-24 19:28:51.492383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.507 [2024-07-24 19:28:51.492397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.507 qpair failed and we were unable to recover it. 00:28:05.507 [2024-07-24 19:28:51.492580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.507 [2024-07-24 19:28:51.492594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.507 qpair failed and we were unable to recover it. 00:28:05.507 [2024-07-24 19:28:51.492747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.507 [2024-07-24 19:28:51.492761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.507 qpair failed and we were unable to recover it. 00:28:05.507 [2024-07-24 19:28:51.492917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.507 [2024-07-24 19:28:51.492931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.507 qpair failed and we were unable to recover it. 00:28:05.508 [2024-07-24 19:28:51.493141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.508 [2024-07-24 19:28:51.493155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.508 qpair failed and we were unable to recover it. 00:28:05.508 [2024-07-24 19:28:51.493363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.508 [2024-07-24 19:28:51.493376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.508 qpair failed and we were unable to recover it. 00:28:05.508 [2024-07-24 19:28:51.493583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.508 [2024-07-24 19:28:51.493597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.508 qpair failed and we were unable to recover it. 
00:28:05.508 [2024-07-24 19:28:51.493875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.508 [2024-07-24 19:28:51.493889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.508 qpair failed and we were unable to recover it. 00:28:05.508 [2024-07-24 19:28:51.494107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.508 [2024-07-24 19:28:51.494120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.508 qpair failed and we were unable to recover it. 00:28:05.508 [2024-07-24 19:28:51.494342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.508 [2024-07-24 19:28:51.494356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.508 qpair failed and we were unable to recover it. 00:28:05.508 [2024-07-24 19:28:51.494595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.508 [2024-07-24 19:28:51.494608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.508 qpair failed and we were unable to recover it. 00:28:05.508 [2024-07-24 19:28:51.494784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.508 [2024-07-24 19:28:51.494798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.508 qpair failed and we were unable to recover it. 00:28:05.508 [2024-07-24 19:28:51.494959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.508 [2024-07-24 19:28:51.494972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.508 qpair failed and we were unable to recover it. 00:28:05.508 [2024-07-24 19:28:51.495118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.508 [2024-07-24 19:28:51.495132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.508 qpair failed and we were unable to recover it. 00:28:05.508 [2024-07-24 19:28:51.495282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.508 [2024-07-24 19:28:51.495296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.508 qpair failed and we were unable to recover it. 00:28:05.508 [2024-07-24 19:28:51.495442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.508 [2024-07-24 19:28:51.495456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.508 qpair failed and we were unable to recover it. 00:28:05.508 [2024-07-24 19:28:51.495735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.508 [2024-07-24 19:28:51.495749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.508 qpair failed and we were unable to recover it. 
00:28:05.508 [2024-07-24 19:28:51.495903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.508 [2024-07-24 19:28:51.495921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.508 qpair failed and we were unable to recover it. 00:28:05.508 [2024-07-24 19:28:51.496087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.508 [2024-07-24 19:28:51.496100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.508 qpair failed and we were unable to recover it. 00:28:05.508 [2024-07-24 19:28:51.496261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.508 [2024-07-24 19:28:51.496274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.508 qpair failed and we were unable to recover it. 00:28:05.508 [2024-07-24 19:28:51.496426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.508 [2024-07-24 19:28:51.496439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.508 qpair failed and we were unable to recover it. 00:28:05.508 [2024-07-24 19:28:51.496646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.508 [2024-07-24 19:28:51.496659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.508 qpair failed and we were unable to recover it. 00:28:05.508 [2024-07-24 19:28:51.496828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.508 [2024-07-24 19:28:51.496842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.508 qpair failed and we were unable to recover it. 00:28:05.508 [2024-07-24 19:28:51.497061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.508 [2024-07-24 19:28:51.497075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.508 qpair failed and we were unable to recover it. 00:28:05.508 [2024-07-24 19:28:51.497217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.508 [2024-07-24 19:28:51.497230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.508 qpair failed and we were unable to recover it. 00:28:05.508 [2024-07-24 19:28:51.497381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.508 [2024-07-24 19:28:51.497395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.508 qpair failed and we were unable to recover it. 00:28:05.508 [2024-07-24 19:28:51.497647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.508 [2024-07-24 19:28:51.497662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.508 qpair failed and we were unable to recover it. 
00:28:05.508 [2024-07-24 19:28:51.497877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.508 [2024-07-24 19:28:51.497891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:05.508 qpair failed and we were unable to recover it.
[the same three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously with advancing timestamps from 19:28:51.497877 through 19:28:51.515305]
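[note: errno 111 on a Linux host is ECONNREFUSED. As I read the test flow, nvmf_target_disconnect_tc2 deliberately takes the nvmf target at 10.0.0.2:4420 down while the initiator keeps retrying, so this flood of refused connections is the behavior under test rather than an infrastructure fault. A quick way to confirm the errno mapping on a similar host (assumes python3 is installed):

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # prints: ECONNREFUSED - Connection refused
]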
[more of the same connect() failed (errno = 111) / qpair failed sequence, 19:28:51.515536 through 19:28:51.516102]
00:28:05.511 19:28:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:05.511 [2024-07-24 19:28:51.516378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.511 [2024-07-24 19:28:51.516393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:05.511 qpair failed and we were unable to recover it.
[the sequence repeats through 19:28:51.516581]
00:28:05.511 19:28:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:28:05.511 [2024-07-24 19:28:51.516793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.511 [2024-07-24 19:28:51.516810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420
00:28:05.511 qpair failed and we were unable to recover it.
[the sequence continues through 19:28:51.517004]
00:28:05.511 19:28:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:05.511 19:28:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[interleaved with the above, the connect() failed (errno = 111) / qpair failed sequence for tqpair=0x7fd54c000b90 continues, 19:28:51.517186 through 19:28:51.518971]
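[note: rpc_cmd above is SPDK's test-suite helper; it forwards its arguments to scripts/rpc.py on the running target's RPC socket. A minimal sketch of the idea, with the socket path assumed to be the default /var/tmp/spdk.sock (the real helper in test/common/autotest_common.sh has more plumbing):

  # hypothetical simplified wrapper; $rootdir is the SPDK checkout
  rpc_cmd() {
      "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"
  }

  # the logged call: create a 64 MB malloc bdev with 512-byte blocks, named Malloc0
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0

On success rpc.py prints the new bdev's name, which is the bare "Malloc0" line that appears further down in this log.]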
[the connect() failed (errno = 111) / sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it sequence repeats continuously, 19:28:51.519112 through 19:28:51.534238]
00:28:05.513 Malloc0
00:28:05.513 19:28:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:05.513 19:28:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
[interleaved with the above, the connect() failed (errno = 111) / qpair failed sequence continues, 19:28:51.534530 through 19:28:51.536273]
00:28:05.513 19:28:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:05.513 19:28:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[interleaved with the above, the connect() failed (errno = 111) / qpair failed sequence continues, 19:28:51.536557 through 19:28:51.538686]
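[note: nvmf_create_transport must initialize the TCP transport inside the target before subsystems and listeners can be added; the "*** TCP Transport Init ***" notice just below confirms it took effect. The -o short option is assumed here to be rpc.py's --c2h-success toggle (it disables the C2H success optimization on TCP); treat that mapping, and the follow-up calls below, as an illustrative sketch of the usual sequence rather than lines captured from this run:

  rpc_cmd nvmf_create_transport -t tcp -o
  # typical next steps in SPDK nvmf tests: publish Malloc0 over the new transport
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
]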
00:28:05.514 [2024-07-24 19:28:51.538925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.514 [2024-07-24 19:28:51.538938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.514 qpair failed and we were unable to recover it. 00:28:05.514 [2024-07-24 19:28:51.539169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.514 [2024-07-24 19:28:51.539182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.514 qpair failed and we were unable to recover it. 00:28:05.514 [2024-07-24 19:28:51.539421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.514 [2024-07-24 19:28:51.539435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.514 qpair failed and we were unable to recover it. 00:28:05.514 [2024-07-24 19:28:51.539645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.514 [2024-07-24 19:28:51.539658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.514 qpair failed and we were unable to recover it. 00:28:05.514 [2024-07-24 19:28:51.539965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.514 [2024-07-24 19:28:51.539979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.514 qpair failed and we were unable to recover it. 00:28:05.514 [2024-07-24 19:28:51.540149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.514 [2024-07-24 19:28:51.540162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.514 qpair failed and we were unable to recover it. 00:28:05.514 [2024-07-24 19:28:51.540471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.514 [2024-07-24 19:28:51.540484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.514 qpair failed and we were unable to recover it. 00:28:05.514 [2024-07-24 19:28:51.540759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.514 [2024-07-24 19:28:51.540773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.514 qpair failed and we were unable to recover it. 00:28:05.514 [2024-07-24 19:28:51.541008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.514 [2024-07-24 19:28:51.541021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.514 qpair failed and we were unable to recover it. 00:28:05.514 [2024-07-24 19:28:51.541296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.514 [2024-07-24 19:28:51.541309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.514 qpair failed and we were unable to recover it. 
00:28:05.514 [2024-07-24 19:28:51.541646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.514 [2024-07-24 19:28:51.541659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.514 qpair failed and we were unable to recover it. 00:28:05.514 [2024-07-24 19:28:51.541889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.514 [2024-07-24 19:28:51.541902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.514 qpair failed and we were unable to recover it. 00:28:05.514 [2024-07-24 19:28:51.542080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.514 [2024-07-24 19:28:51.542094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.514 qpair failed and we were unable to recover it. 00:28:05.514 [2024-07-24 19:28:51.542312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.514 [2024-07-24 19:28:51.542325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.514 qpair failed and we were unable to recover it. 00:28:05.514 [2024-07-24 19:28:51.542442] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:05.514 [2024-07-24 19:28:51.542532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.514 [2024-07-24 19:28:51.542545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.514 qpair failed and we were unable to recover it. 00:28:05.514 [2024-07-24 19:28:51.542846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.514 [2024-07-24 19:28:51.542859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.514 qpair failed and we were unable to recover it. 00:28:05.514 [2024-07-24 19:28:51.543128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.514 [2024-07-24 19:28:51.543141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.514 qpair failed and we were unable to recover it. 00:28:05.514 [2024-07-24 19:28:51.543468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.514 [2024-07-24 19:28:51.543481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.514 qpair failed and we were unable to recover it. 00:28:05.514 [2024-07-24 19:28:51.543756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.514 [2024-07-24 19:28:51.543770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.514 qpair failed and we were unable to recover it. 
00:28:05.514 [2024-07-24 19:28:51.544009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.514 [2024-07-24 19:28:51.544022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.514 qpair failed and we were unable to recover it. 00:28:05.514 [2024-07-24 19:28:51.544257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.514 [2024-07-24 19:28:51.544272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.514 qpair failed and we were unable to recover it. 00:28:05.514 [2024-07-24 19:28:51.544430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.515 [2024-07-24 19:28:51.544444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.515 qpair failed and we were unable to recover it. 00:28:05.515 [2024-07-24 19:28:51.544743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.515 [2024-07-24 19:28:51.544757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.515 qpair failed and we were unable to recover it. 00:28:05.515 [2024-07-24 19:28:51.544985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.515 [2024-07-24 19:28:51.544999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.515 qpair failed and we were unable to recover it. 00:28:05.515 [2024-07-24 19:28:51.545304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.515 [2024-07-24 19:28:51.545317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.515 qpair failed and we were unable to recover it. 00:28:05.515 [2024-07-24 19:28:51.545461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.515 [2024-07-24 19:28:51.545474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.515 qpair failed and we were unable to recover it. 00:28:05.515 [2024-07-24 19:28:51.545803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.515 [2024-07-24 19:28:51.545816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.515 qpair failed and we were unable to recover it. 00:28:05.515 [2024-07-24 19:28:51.546144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.515 [2024-07-24 19:28:51.546158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.515 qpair failed and we were unable to recover it. 00:28:05.515 [2024-07-24 19:28:51.546332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.515 [2024-07-24 19:28:51.546347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.515 qpair failed and we were unable to recover it. 
00:28:05.515 [2024-07-24 19:28:51.546601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.515 [2024-07-24 19:28:51.546615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.515 qpair failed and we were unable to recover it. 00:28:05.515 [2024-07-24 19:28:51.546896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.515 [2024-07-24 19:28:51.546909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.515 qpair failed and we were unable to recover it. 00:28:05.515 [2024-07-24 19:28:51.547126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.515 [2024-07-24 19:28:51.547140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.515 qpair failed and we were unable to recover it. 00:28:05.515 [2024-07-24 19:28:51.547366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.515 [2024-07-24 19:28:51.547379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.515 qpair failed and we were unable to recover it. 00:28:05.515 [2024-07-24 19:28:51.547656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.515 [2024-07-24 19:28:51.547669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.515 qpair failed and we were unable to recover it. 00:28:05.515 [2024-07-24 19:28:51.547896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.515 [2024-07-24 19:28:51.547909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.515 qpair failed and we were unable to recover it. 00:28:05.515 [2024-07-24 19:28:51.548194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.515 [2024-07-24 19:28:51.548207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.515 qpair failed and we were unable to recover it. 00:28:05.515 [2024-07-24 19:28:51.548497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.515 [2024-07-24 19:28:51.548510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.515 qpair failed and we were unable to recover it. 00:28:05.515 [2024-07-24 19:28:51.548763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.515 [2024-07-24 19:28:51.548777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.515 qpair failed and we were unable to recover it. 00:28:05.515 [2024-07-24 19:28:51.548963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.515 [2024-07-24 19:28:51.548976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.515 qpair failed and we were unable to recover it. 
00:28:05.515 [2024-07-24 19:28:51.549156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.515 [2024-07-24 19:28:51.549170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.515 qpair failed and we were unable to recover it. 00:28:05.515 [2024-07-24 19:28:51.549393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.515 [2024-07-24 19:28:51.549406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.515 qpair failed and we were unable to recover it. 00:28:05.515 [2024-07-24 19:28:51.549546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.515 [2024-07-24 19:28:51.549560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.515 qpair failed and we were unable to recover it. 00:28:05.515 [2024-07-24 19:28:51.549771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.515 [2024-07-24 19:28:51.549787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.515 qpair failed and we were unable to recover it. 00:28:05.515 [2024-07-24 19:28:51.550087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.515 [2024-07-24 19:28:51.550100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.515 qpair failed and we were unable to recover it. 00:28:05.515 [2024-07-24 19:28:51.550329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.515 [2024-07-24 19:28:51.550342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.515 qpair failed and we were unable to recover it. 00:28:05.515 [2024-07-24 19:28:51.550586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.515 [2024-07-24 19:28:51.550599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.515 qpair failed and we were unable to recover it. 00:28:05.515 [2024-07-24 19:28:51.550839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.515 [2024-07-24 19:28:51.550852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.515 qpair failed and we were unable to recover it. 00:28:05.515 [2024-07-24 19:28:51.551083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.515 [2024-07-24 19:28:51.551096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd54c000b90 with addr=10.0.0.2, port=4420 00:28:05.515 qpair failed and we were unable to recover it. 
00:28:05.515 19:28:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:05.515 19:28:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:05.515 19:28:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:05.515 19:28:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[connect() retries continue, timestamps 19:28:51.551 through 19:28:51.553]
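The matching hand-run form of the subsystem creation above, under the same assumption that rpc_cmd forwards straight to scripts/rpc.py:

  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001

-a allows any host NQN to connect and -s sets the subsystem serial number; both are standard rpc.py options.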
[connect() retries continue: posix_sock_create errno = 111 / nvme_tcp_qpair_connect_sock tqpair=0x7fd54c000b90 (10.0.0.2:4420) / "qpair failed and we were unable to recover it.", timestamps 19:28:51.553 through 19:28:51.558]
00:28:05.516 19:28:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:05.516 19:28:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:05.516 19:28:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:05.516 19:28:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[connect() retries continue around the trace, timestamps 19:28:51.558 through 19:28:51.560]
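Likewise for attaching the namespace: the bare "Malloc0" printed earlier is the name of the malloc bdev the test created, and the traced call exposes it through the subsystem (when no NSID is given, the target normally auto-assigns one starting at 1 — an assumption here, not something this log states):

  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0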
[connect() retries continue, timestamps 19:28:51.560 through 19:28:51.565]
00:28:05.517 19:28:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:05.517 19:28:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:05.517 19:28:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:05.517 19:28:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[connect() retries continue around the trace, timestamps 19:28:51.565 through 19:28:51.570]
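And the listener, which is what produces the "Target Listening" notice just below (-t transport type, -a target address, -s transport service ID, i.e. the TCP port), again assuming the plain rpc.py equivalent:

  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420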
00:28:05.518 [2024-07-24 19:28:51.570693] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:05.518 [2024-07-24 19:28:51.573028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
[2024-07-24 19:28:51.573125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
[2024-07-24 19:28:51.573149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
[2024-07-24 19:28:51.573161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
[2024-07-24 19:28:51.573170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
[2024-07-24 19:28:51.573195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:05.518 qpair failed and we were unable to recover it.
00:28:05.518 19:28:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:05.518 19:28:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:28:05.518 19:28:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:05.518 19:28:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[the Unknown-controller-ID / CONNECT-failed block above now repeats, identical apart from timestamps; the next occurrence is at 19:28:51.583]
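One decoding note on the failure signature that dominates the rest of this run. sct 1 marks a command-specific status type, and sc 130 in hex is

  printf '0x%x\n' 130    # -> 0x82

which for a Fabrics CONNECT command is "Connect Invalid Parameters" (SPDK's SPDK_NVMF_FABRIC_SC_INVALID_PARAM; take the constant-name mapping as an assumption from memory, not something this log states). Read together with the target-side "Unknown controller ID 0x1", the picture is consistent: the host keeps trying to re-attach I/O qpair 2 to controller 0x1, which the target has already torn down — the condition this disconnect test exercises.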
00:28:05.518 19:28:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:05.518 19:28:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1688339
[CONNECT-failed blocks recur at 19:28:51.593, .603 and .613]
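The wait 1688339 traced above is plain bash: the harness backgrounded a workload process earlier (1688339 is its PID) and line 50 of target_disconnect.sh now blocks until it exits, while the connect/disconnect churn above plays out. A minimal sketch of the shape, with hypothetical names:

  reconnect_workload &     # hypothetical: started earlier, keeps dialing 10.0.0.2:4420
  workload_pid=$!
  # ... target is torn down and rebuilt while it runs ...
  wait $workload_pid       # target_disconnect.sh:50 blocks here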
00:28:05.518 [2024-07-24 19:28:51.623024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.518 [2024-07-24 19:28:51.623102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.518 [2024-07-24 19:28:51.623120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.518 [2024-07-24 19:28:51.623133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.518 [2024-07-24 19:28:51.623142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:05.518 [2024-07-24 19:28:51.623162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:05.518 qpair failed and we were unable to recover it. 00:28:05.518 [2024-07-24 19:28:51.632967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.518 [2024-07-24 19:28:51.633045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.518 [2024-07-24 19:28:51.633064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.518 [2024-07-24 19:28:51.633074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.518 [2024-07-24 19:28:51.633084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:05.518 [2024-07-24 19:28:51.633103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:05.518 qpair failed and we were unable to recover it. 00:28:05.518 [2024-07-24 19:28:51.643033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.518 [2024-07-24 19:28:51.643112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.518 [2024-07-24 19:28:51.643130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.518 [2024-07-24 19:28:51.643141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.518 [2024-07-24 19:28:51.643150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:05.518 [2024-07-24 19:28:51.643169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:05.518 qpair failed and we were unable to recover it. 
00:28:05.518 [2024-07-24 19:28:51.653076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.518 [2024-07-24 19:28:51.653172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.518 [2024-07-24 19:28:51.653190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.518 [2024-07-24 19:28:51.653200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.518 [2024-07-24 19:28:51.653210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:05.518 [2024-07-24 19:28:51.653230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:05.518 qpair failed and we were unable to recover it. 00:28:05.518 [2024-07-24 19:28:51.663098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.518 [2024-07-24 19:28:51.663218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.518 [2024-07-24 19:28:51.663236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.518 [2024-07-24 19:28:51.663247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.518 [2024-07-24 19:28:51.663256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:05.518 [2024-07-24 19:28:51.663276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:05.518 qpair failed and we were unable to recover it. 00:28:05.518 [2024-07-24 19:28:51.673202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.518 [2024-07-24 19:28:51.673277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.518 [2024-07-24 19:28:51.673295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.519 [2024-07-24 19:28:51.673305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.519 [2024-07-24 19:28:51.673313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:05.519 [2024-07-24 19:28:51.673333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:05.519 qpair failed and we were unable to recover it. 
00:28:05.519 [2024-07-24 19:28:51.683137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.519 [2024-07-24 19:28:51.683223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.519 [2024-07-24 19:28:51.683242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.519 [2024-07-24 19:28:51.683252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.519 [2024-07-24 19:28:51.683261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:05.519 [2024-07-24 19:28:51.683280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:05.519 qpair failed and we were unable to recover it.
00:28:05.519 [2024-07-24 19:28:51.693155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.519 [2024-07-24 19:28:51.693240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.519 [2024-07-24 19:28:51.693259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.519 [2024-07-24 19:28:51.693270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.519 [2024-07-24 19:28:51.693280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:05.519 [2024-07-24 19:28:51.693299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:05.519 qpair failed and we were unable to recover it.
00:28:05.519 [2024-07-24 19:28:51.703220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.519 [2024-07-24 19:28:51.703299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.519 [2024-07-24 19:28:51.703317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.519 [2024-07-24 19:28:51.703327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.519 [2024-07-24 19:28:51.703336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:05.519 [2024-07-24 19:28:51.703354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:05.519 qpair failed and we were unable to recover it.
00:28:05.519 [2024-07-24 19:28:51.713276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.519 [2024-07-24 19:28:51.713349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.519 [2024-07-24 19:28:51.713368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.519 [2024-07-24 19:28:51.713379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.519 [2024-07-24 19:28:51.713387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:05.519 [2024-07-24 19:28:51.713405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:05.519 qpair failed and we were unable to recover it.
00:28:05.780 [2024-07-24 19:28:51.723257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.780 [2024-07-24 19:28:51.723409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.780 [2024-07-24 19:28:51.723428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.780 [2024-07-24 19:28:51.723437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.780 [2024-07-24 19:28:51.723446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:05.780 [2024-07-24 19:28:51.723465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:05.780 qpair failed and we were unable to recover it.
00:28:05.780 [2024-07-24 19:28:51.733315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.780 [2024-07-24 19:28:51.733387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.780 [2024-07-24 19:28:51.733405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.780 [2024-07-24 19:28:51.733415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.780 [2024-07-24 19:28:51.733423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:05.780 [2024-07-24 19:28:51.733441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:05.780 qpair failed and we were unable to recover it.
00:28:05.780 [2024-07-24 19:28:51.743355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.780 [2024-07-24 19:28:51.743470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.780 [2024-07-24 19:28:51.743489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.780 [2024-07-24 19:28:51.743498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.780 [2024-07-24 19:28:51.743507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:05.780 [2024-07-24 19:28:51.743526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:05.780 qpair failed and we were unable to recover it.
00:28:05.780 [2024-07-24 19:28:51.753376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.780 [2024-07-24 19:28:51.753450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.780 [2024-07-24 19:28:51.753469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.780 [2024-07-24 19:28:51.753479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.780 [2024-07-24 19:28:51.753488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:05.780 [2024-07-24 19:28:51.753510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:05.780 qpair failed and we were unable to recover it.
00:28:05.780 [2024-07-24 19:28:51.763365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.780 [2024-07-24 19:28:51.763482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.780 [2024-07-24 19:28:51.763500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.780 [2024-07-24 19:28:51.763510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.780 [2024-07-24 19:28:51.763519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:05.780 [2024-07-24 19:28:51.763538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:05.780 qpair failed and we were unable to recover it.
00:28:05.780 [2024-07-24 19:28:51.773422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.780 [2024-07-24 19:28:51.773507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.780 [2024-07-24 19:28:51.773524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.780 [2024-07-24 19:28:51.773534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.780 [2024-07-24 19:28:51.773542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:05.780 [2024-07-24 19:28:51.773561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:05.780 qpair failed and we were unable to recover it.
00:28:05.780 [2024-07-24 19:28:51.783467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.780 [2024-07-24 19:28:51.783544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.780 [2024-07-24 19:28:51.783561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.780 [2024-07-24 19:28:51.783571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.780 [2024-07-24 19:28:51.783580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:05.780 [2024-07-24 19:28:51.783598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:05.780 qpair failed and we were unable to recover it.
00:28:05.780 [2024-07-24 19:28:51.793485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.780 [2024-07-24 19:28:51.793629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.780 [2024-07-24 19:28:51.793646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.780 [2024-07-24 19:28:51.793655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.780 [2024-07-24 19:28:51.793664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:05.780 [2024-07-24 19:28:51.793682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:05.780 qpair failed and we were unable to recover it.
00:28:05.780 [2024-07-24 19:28:51.803486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.780 [2024-07-24 19:28:51.803562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.780 [2024-07-24 19:28:51.803584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.780 [2024-07-24 19:28:51.803594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.780 [2024-07-24 19:28:51.803603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:05.780 [2024-07-24 19:28:51.803621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:05.780 qpair failed and we were unable to recover it.
00:28:05.780 [2024-07-24 19:28:51.813560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.780 [2024-07-24 19:28:51.813675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.780 [2024-07-24 19:28:51.813693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.780 [2024-07-24 19:28:51.813703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.780 [2024-07-24 19:28:51.813712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:05.780 [2024-07-24 19:28:51.813733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:05.780 qpair failed and we were unable to recover it.
00:28:05.780 [2024-07-24 19:28:51.823580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.780 [2024-07-24 19:28:51.823653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.780 [2024-07-24 19:28:51.823671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.781 [2024-07-24 19:28:51.823681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.781 [2024-07-24 19:28:51.823690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:05.781 [2024-07-24 19:28:51.823707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:05.781 qpair failed and we were unable to recover it.
00:28:05.781 [2024-07-24 19:28:51.833623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.781 [2024-07-24 19:28:51.833711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.781 [2024-07-24 19:28:51.833731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.781 [2024-07-24 19:28:51.833741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.781 [2024-07-24 19:28:51.833750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:05.781 [2024-07-24 19:28:51.833768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:05.781 qpair failed and we were unable to recover it.
00:28:05.781 [2024-07-24 19:28:51.843658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.781 [2024-07-24 19:28:51.843754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.781 [2024-07-24 19:28:51.843771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.781 [2024-07-24 19:28:51.843781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.781 [2024-07-24 19:28:51.843792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:05.781 [2024-07-24 19:28:51.843810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:05.781 qpair failed and we were unable to recover it.
00:28:05.781 [2024-07-24 19:28:51.853710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.781 [2024-07-24 19:28:51.853789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.781 [2024-07-24 19:28:51.853807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.781 [2024-07-24 19:28:51.853816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.781 [2024-07-24 19:28:51.853825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:05.781 [2024-07-24 19:28:51.853843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:05.781 qpair failed and we were unable to recover it.
00:28:05.781 [2024-07-24 19:28:51.863831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.781 [2024-07-24 19:28:51.863978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.781 [2024-07-24 19:28:51.863997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.781 [2024-07-24 19:28:51.864006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.781 [2024-07-24 19:28:51.864015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:05.781 [2024-07-24 19:28:51.864035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:05.781 qpair failed and we were unable to recover it.
00:28:05.781 [2024-07-24 19:28:51.873772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.781 [2024-07-24 19:28:51.873852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.781 [2024-07-24 19:28:51.873870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.781 [2024-07-24 19:28:51.873880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.781 [2024-07-24 19:28:51.873888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:05.781 [2024-07-24 19:28:51.873906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:05.781 qpair failed and we were unable to recover it.
00:28:05.781 [2024-07-24 19:28:51.883687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.781 [2024-07-24 19:28:51.883850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.781 [2024-07-24 19:28:51.883869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.781 [2024-07-24 19:28:51.883878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.781 [2024-07-24 19:28:51.883887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:05.781 [2024-07-24 19:28:51.883906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:05.781 qpair failed and we were unable to recover it.
00:28:05.781 [2024-07-24 19:28:51.893721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.781 [2024-07-24 19:28:51.893804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.781 [2024-07-24 19:28:51.893821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.781 [2024-07-24 19:28:51.893831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.781 [2024-07-24 19:28:51.893840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:05.781 [2024-07-24 19:28:51.893858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:05.781 qpair failed and we were unable to recover it.
00:28:05.781 [2024-07-24 19:28:51.903752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.781 [2024-07-24 19:28:51.903829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.781 [2024-07-24 19:28:51.903845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.781 [2024-07-24 19:28:51.903855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.781 [2024-07-24 19:28:51.903864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:05.781 [2024-07-24 19:28:51.903882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:05.781 qpair failed and we were unable to recover it.
00:28:05.781 [2024-07-24 19:28:51.913796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.781 [2024-07-24 19:28:51.913865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.781 [2024-07-24 19:28:51.913882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.781 [2024-07-24 19:28:51.913892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.781 [2024-07-24 19:28:51.913901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:05.781 [2024-07-24 19:28:51.913919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:05.781 qpair failed and we were unable to recover it.
00:28:05.781 [2024-07-24 19:28:51.923809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.781 [2024-07-24 19:28:51.923885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.781 [2024-07-24 19:28:51.923902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.781 [2024-07-24 19:28:51.923912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.781 [2024-07-24 19:28:51.923920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:05.781 [2024-07-24 19:28:51.923938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:05.781 qpair failed and we were unable to recover it.
00:28:05.781 [2024-07-24 19:28:51.933773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.781 [2024-07-24 19:28:51.933918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.781 [2024-07-24 19:28:51.933936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.781 [2024-07-24 19:28:51.933945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.781 [2024-07-24 19:28:51.933957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:05.781 [2024-07-24 19:28:51.933976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:05.781 qpair failed and we were unable to recover it.
00:28:05.781 [2024-07-24 19:28:51.943877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.781 [2024-07-24 19:28:51.943971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.781 [2024-07-24 19:28:51.943988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.781 [2024-07-24 19:28:51.943997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.781 [2024-07-24 19:28:51.944006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:05.782 [2024-07-24 19:28:51.944025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:05.782 qpair failed and we were unable to recover it.
00:28:05.782 [2024-07-24 19:28:51.953904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.782 [2024-07-24 19:28:51.953981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.782 [2024-07-24 19:28:51.953998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.782 [2024-07-24 19:28:51.954008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.782 [2024-07-24 19:28:51.954016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:05.782 [2024-07-24 19:28:51.954034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:05.782 qpair failed and we were unable to recover it.
00:28:05.782 [2024-07-24 19:28:51.963917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.782 [2024-07-24 19:28:51.963993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.782 [2024-07-24 19:28:51.964010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.782 [2024-07-24 19:28:51.964020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.782 [2024-07-24 19:28:51.964029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:05.782 [2024-07-24 19:28:51.964047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:05.782 qpair failed and we were unable to recover it.
00:28:05.782 [2024-07-24 19:28:51.973942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.782 [2024-07-24 19:28:51.974016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.782 [2024-07-24 19:28:51.974033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.782 [2024-07-24 19:28:51.974043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.782 [2024-07-24 19:28:51.974051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:05.782 [2024-07-24 19:28:51.974070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:05.782 qpair failed and we were unable to recover it.
00:28:05.782 [2024-07-24 19:28:51.983998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.782 [2024-07-24 19:28:51.984068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.782 [2024-07-24 19:28:51.984086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.782 [2024-07-24 19:28:51.984096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.782 [2024-07-24 19:28:51.984104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:05.782 [2024-07-24 19:28:51.984122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:05.782 qpair failed and we were unable to recover it.
00:28:05.782 [2024-07-24 19:28:51.994012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.782 [2024-07-24 19:28:51.994094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.782 [2024-07-24 19:28:51.994111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.782 [2024-07-24 19:28:51.994122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.782 [2024-07-24 19:28:51.994130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:05.782 [2024-07-24 19:28:51.994148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:05.782 qpair failed and we were unable to recover it.
00:28:05.782 [2024-07-24 19:28:52.004034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.782 [2024-07-24 19:28:52.004112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.782 [2024-07-24 19:28:52.004129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.782 [2024-07-24 19:28:52.004139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.782 [2024-07-24 19:28:52.004148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:05.782 [2024-07-24 19:28:52.004166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:05.782 qpair failed and we were unable to recover it.
00:28:05.782 [2024-07-24 19:28:52.014066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.782 [2024-07-24 19:28:52.014145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.782 [2024-07-24 19:28:52.014162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.782 [2024-07-24 19:28:52.014172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.782 [2024-07-24 19:28:52.014181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:05.782 [2024-07-24 19:28:52.014199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:05.782 qpair failed and we were unable to recover it.
00:28:06.043 [2024-07-24 19:28:52.024099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.043 [2024-07-24 19:28:52.024171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.043 [2024-07-24 19:28:52.024189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.043 [2024-07-24 19:28:52.024202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.043 [2024-07-24 19:28:52.024211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.043 [2024-07-24 19:28:52.024228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.043 qpair failed and we were unable to recover it.
00:28:06.043 [2024-07-24 19:28:52.034118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.043 [2024-07-24 19:28:52.034261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.043 [2024-07-24 19:28:52.034279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.043 [2024-07-24 19:28:52.034289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.043 [2024-07-24 19:28:52.034298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.043 [2024-07-24 19:28:52.034317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.043 qpair failed and we were unable to recover it.
00:28:06.043 [2024-07-24 19:28:52.044197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.043 [2024-07-24 19:28:52.044276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.043 [2024-07-24 19:28:52.044293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.043 [2024-07-24 19:28:52.044303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.043 [2024-07-24 19:28:52.044312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.043 [2024-07-24 19:28:52.044329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.043 qpair failed and we were unable to recover it.
00:28:06.043 [2024-07-24 19:28:52.054175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.043 [2024-07-24 19:28:52.054252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.043 [2024-07-24 19:28:52.054269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.043 [2024-07-24 19:28:52.054280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.043 [2024-07-24 19:28:52.054288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.043 [2024-07-24 19:28:52.054306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.043 qpair failed and we were unable to recover it.
00:28:06.043 [2024-07-24 19:28:52.064260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.043 [2024-07-24 19:28:52.064336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.043 [2024-07-24 19:28:52.064353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.043 [2024-07-24 19:28:52.064363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.043 [2024-07-24 19:28:52.064372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.043 [2024-07-24 19:28:52.064390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.043 qpair failed and we were unable to recover it.
00:28:06.043 [2024-07-24 19:28:52.074218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.043 [2024-07-24 19:28:52.074295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.043 [2024-07-24 19:28:52.074312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.043 [2024-07-24 19:28:52.074322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.043 [2024-07-24 19:28:52.074331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.043 [2024-07-24 19:28:52.074348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.043 qpair failed and we were unable to recover it.
00:28:06.043 [2024-07-24 19:28:52.084259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.043 [2024-07-24 19:28:52.084338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.043 [2024-07-24 19:28:52.084355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.043 [2024-07-24 19:28:52.084364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.043 [2024-07-24 19:28:52.084373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.043 [2024-07-24 19:28:52.084391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.043 qpair failed and we were unable to recover it.
00:28:06.043 [2024-07-24 19:28:52.094316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.043 [2024-07-24 19:28:52.094391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.044 [2024-07-24 19:28:52.094409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.044 [2024-07-24 19:28:52.094418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.044 [2024-07-24 19:28:52.094427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.044 [2024-07-24 19:28:52.094446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.044 qpair failed and we were unable to recover it.
00:28:06.044 [2024-07-24 19:28:52.104326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.044 [2024-07-24 19:28:52.104404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.044 [2024-07-24 19:28:52.104421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.044 [2024-07-24 19:28:52.104431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.044 [2024-07-24 19:28:52.104440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.044 [2024-07-24 19:28:52.104458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.044 qpair failed and we were unable to recover it.
00:28:06.044 [2024-07-24 19:28:52.114363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.044 [2024-07-24 19:28:52.114477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.044 [2024-07-24 19:28:52.114499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.044 [2024-07-24 19:28:52.114509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.044 [2024-07-24 19:28:52.114518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.044 [2024-07-24 19:28:52.114537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.044 qpair failed and we were unable to recover it.
00:28:06.044 [2024-07-24 19:28:52.124370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.044 [2024-07-24 19:28:52.124448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.044 [2024-07-24 19:28:52.124465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.044 [2024-07-24 19:28:52.124475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.044 [2024-07-24 19:28:52.124483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.044 [2024-07-24 19:28:52.124501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.044 qpair failed and we were unable to recover it.
00:28:06.044 [2024-07-24 19:28:52.134457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.044 [2024-07-24 19:28:52.134569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.044 [2024-07-24 19:28:52.134586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.044 [2024-07-24 19:28:52.134596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.044 [2024-07-24 19:28:52.134605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.044 [2024-07-24 19:28:52.134624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.044 qpair failed and we were unable to recover it.
00:28:06.044 [2024-07-24 19:28:52.144450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.044 [2024-07-24 19:28:52.144525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.044 [2024-07-24 19:28:52.144543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.044 [2024-07-24 19:28:52.144552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.044 [2024-07-24 19:28:52.144561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.044 [2024-07-24 19:28:52.144579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.044 qpair failed and we were unable to recover it.
00:28:06.044 [2024-07-24 19:28:52.154474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.044 [2024-07-24 19:28:52.154557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.044 [2024-07-24 19:28:52.154575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.044 [2024-07-24 19:28:52.154585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.044 [2024-07-24 19:28:52.154593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.044 [2024-07-24 19:28:52.154615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.044 qpair failed and we were unable to recover it.
00:28:06.044 [2024-07-24 19:28:52.164488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.044 [2024-07-24 19:28:52.164563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.044 [2024-07-24 19:28:52.164580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.044 [2024-07-24 19:28:52.164590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.044 [2024-07-24 19:28:52.164598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.044 [2024-07-24 19:28:52.164615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.044 qpair failed and we were unable to recover it.
00:28:06.044 [2024-07-24 19:28:52.174531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.044 [2024-07-24 19:28:52.174612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.044 [2024-07-24 19:28:52.174630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.044 [2024-07-24 19:28:52.174639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.044 [2024-07-24 19:28:52.174648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.044 [2024-07-24 19:28:52.174666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.044 qpair failed and we were unable to recover it.
00:28:06.044 [2024-07-24 19:28:52.184561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.044 [2024-07-24 19:28:52.184638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.044 [2024-07-24 19:28:52.184655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.044 [2024-07-24 19:28:52.184665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.044 [2024-07-24 19:28:52.184673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.044 [2024-07-24 19:28:52.184691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.044 qpair failed and we were unable to recover it.
00:28:06.044 [2024-07-24 19:28:52.194579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.044 [2024-07-24 19:28:52.194651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.044 [2024-07-24 19:28:52.194669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.044 [2024-07-24 19:28:52.194679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.044 [2024-07-24 19:28:52.194687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.044 [2024-07-24 19:28:52.194705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.044 qpair failed and we were unable to recover it.
00:28:06.044 [2024-07-24 19:28:52.204598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.044 [2024-07-24 19:28:52.204675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.044 [2024-07-24 19:28:52.204696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.044 [2024-07-24 19:28:52.204706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.044 [2024-07-24 19:28:52.204718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.044 [2024-07-24 19:28:52.204737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.044 qpair failed and we were unable to recover it.
00:28:06.044 [2024-07-24 19:28:52.214647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.044 [2024-07-24 19:28:52.214731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.044 [2024-07-24 19:28:52.214749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.044 [2024-07-24 19:28:52.214759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.045 [2024-07-24 19:28:52.214767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.045 [2024-07-24 19:28:52.214785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.045 qpair failed and we were unable to recover it.
00:28:06.045 [2024-07-24 19:28:52.224674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.045 [2024-07-24 19:28:52.224750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.045 [2024-07-24 19:28:52.224768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.045 [2024-07-24 19:28:52.224778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.045 [2024-07-24 19:28:52.224787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.045 [2024-07-24 19:28:52.224805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.045 qpair failed and we were unable to recover it.
00:28:06.045 [2024-07-24 19:28:52.234744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.045 [2024-07-24 19:28:52.234851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.045 [2024-07-24 19:28:52.234867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.045 [2024-07-24 19:28:52.234878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.045 [2024-07-24 19:28:52.234887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.045 [2024-07-24 19:28:52.234905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.045 qpair failed and we were unable to recover it.
00:28:06.045 [2024-07-24 19:28:52.244718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.045 [2024-07-24 19:28:52.244874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.045 [2024-07-24 19:28:52.244893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.045 [2024-07-24 19:28:52.244904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.045 [2024-07-24 19:28:52.244916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.045 [2024-07-24 19:28:52.244935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.045 qpair failed and we were unable to recover it.
00:28:06.045 [2024-07-24 19:28:52.254749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.045 [2024-07-24 19:28:52.254874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.045 [2024-07-24 19:28:52.254892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.045 [2024-07-24 19:28:52.254902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.045 [2024-07-24 19:28:52.254911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.045 [2024-07-24 19:28:52.254928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.045 qpair failed and we were unable to recover it.
00:28:06.045 [2024-07-24 19:28:52.264790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.045 [2024-07-24 19:28:52.264905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.045 [2024-07-24 19:28:52.264923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.045 [2024-07-24 19:28:52.264933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.045 [2024-07-24 19:28:52.264942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.045 [2024-07-24 19:28:52.264960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.045 qpair failed and we were unable to recover it.
00:28:06.045 [2024-07-24 19:28:52.274800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.045 [2024-07-24 19:28:52.274945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.045 [2024-07-24 19:28:52.274964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.045 [2024-07-24 19:28:52.274973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.045 [2024-07-24 19:28:52.274982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.045 [2024-07-24 19:28:52.275001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.045 qpair failed and we were unable to recover it.
00:28:06.305 [2024-07-24 19:28:52.284825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.305 [2024-07-24 19:28:52.284955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.305 [2024-07-24 19:28:52.284974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.305 [2024-07-24 19:28:52.284984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.305 [2024-07-24 19:28:52.284994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.305 [2024-07-24 19:28:52.285012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.305 qpair failed and we were unable to recover it.
00:28:06.305 [2024-07-24 19:28:52.294847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.306 [2024-07-24 19:28:52.294924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.306 [2024-07-24 19:28:52.294942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.306 [2024-07-24 19:28:52.294952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.306 [2024-07-24 19:28:52.294960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.306 [2024-07-24 19:28:52.294979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.306 qpair failed and we were unable to recover it.
00:28:06.306 [2024-07-24 19:28:52.304909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.306 [2024-07-24 19:28:52.305025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.306 [2024-07-24 19:28:52.305044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.306 [2024-07-24 19:28:52.305054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.306 [2024-07-24 19:28:52.305064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.306 [2024-07-24 19:28:52.305082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.306 qpair failed and we were unable to recover it.
00:28:06.306 [2024-07-24 19:28:52.314977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.306 [2024-07-24 19:28:52.315058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.306 [2024-07-24 19:28:52.315075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.306 [2024-07-24 19:28:52.315085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.306 [2024-07-24 19:28:52.315094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.306 [2024-07-24 19:28:52.315111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.306 qpair failed and we were unable to recover it.
00:28:06.306 [2024-07-24 19:28:52.324985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.306 [2024-07-24 19:28:52.325060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.306 [2024-07-24 19:28:52.325078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.306 [2024-07-24 19:28:52.325088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.306 [2024-07-24 19:28:52.325096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.306 [2024-07-24 19:28:52.325114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.306 qpair failed and we were unable to recover it.
00:28:06.306 [2024-07-24 19:28:52.334971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.306 [2024-07-24 19:28:52.335047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.306 [2024-07-24 19:28:52.335064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.306 [2024-07-24 19:28:52.335073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.306 [2024-07-24 19:28:52.335085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.306 [2024-07-24 19:28:52.335102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.306 qpair failed and we were unable to recover it.
00:28:06.306 [2024-07-24 19:28:52.345020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.306 [2024-07-24 19:28:52.345091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.306 [2024-07-24 19:28:52.345110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.306 [2024-07-24 19:28:52.345119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.306 [2024-07-24 19:28:52.345128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.306 [2024-07-24 19:28:52.345147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.306 qpair failed and we were unable to recover it.
00:28:06.306 [2024-07-24 19:28:52.355046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.306 [2024-07-24 19:28:52.355125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.306 [2024-07-24 19:28:52.355142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.306 [2024-07-24 19:28:52.355152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.306 [2024-07-24 19:28:52.355161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.306 [2024-07-24 19:28:52.355178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.306 qpair failed and we were unable to recover it.
00:28:06.306 [2024-07-24 19:28:52.365044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.306 [2024-07-24 19:28:52.365126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.306 [2024-07-24 19:28:52.365145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.306 [2024-07-24 19:28:52.365156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.306 [2024-07-24 19:28:52.365165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.306 [2024-07-24 19:28:52.365183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.306 qpair failed and we were unable to recover it.
00:28:06.306 [2024-07-24 19:28:52.375106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.306 [2024-07-24 19:28:52.375183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.306 [2024-07-24 19:28:52.375200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.306 [2024-07-24 19:28:52.375210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.306 [2024-07-24 19:28:52.375218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.306 [2024-07-24 19:28:52.375236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.306 qpair failed and we were unable to recover it.
00:28:06.306 [2024-07-24 19:28:52.385122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.306 [2024-07-24 19:28:52.385198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.306 [2024-07-24 19:28:52.385215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.306 [2024-07-24 19:28:52.385225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.306 [2024-07-24 19:28:52.385234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.306 [2024-07-24 19:28:52.385251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.306 qpair failed and we were unable to recover it.
00:28:06.306 [2024-07-24 19:28:52.395147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.306 [2024-07-24 19:28:52.395248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.306 [2024-07-24 19:28:52.395266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.306 [2024-07-24 19:28:52.395276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.306 [2024-07-24 19:28:52.395286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.306 [2024-07-24 19:28:52.395305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.306 qpair failed and we were unable to recover it.
00:28:06.306 [2024-07-24 19:28:52.405171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.306 [2024-07-24 19:28:52.405260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.306 [2024-07-24 19:28:52.405277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.306 [2024-07-24 19:28:52.405288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.306 [2024-07-24 19:28:52.405296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.306 [2024-07-24 19:28:52.405314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.306 qpair failed and we were unable to recover it.
00:28:06.306 [2024-07-24 19:28:52.415173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.306 [2024-07-24 19:28:52.415263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.307 [2024-07-24 19:28:52.415280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.307 [2024-07-24 19:28:52.415289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.307 [2024-07-24 19:28:52.415298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.307 [2024-07-24 19:28:52.415316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.307 qpair failed and we were unable to recover it.
00:28:06.307 [2024-07-24 19:28:52.425241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.307 [2024-07-24 19:28:52.425317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.307 [2024-07-24 19:28:52.425335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.307 [2024-07-24 19:28:52.425348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.307 [2024-07-24 19:28:52.425356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.307 [2024-07-24 19:28:52.425374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.307 qpair failed and we were unable to recover it.
00:28:06.307 [2024-07-24 19:28:52.435263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.307 [2024-07-24 19:28:52.435375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.307 [2024-07-24 19:28:52.435394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.307 [2024-07-24 19:28:52.435404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.307 [2024-07-24 19:28:52.435412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.307 [2024-07-24 19:28:52.435431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.307 qpair failed and we were unable to recover it.
00:28:06.307 [2024-07-24 19:28:52.445278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.307 [2024-07-24 19:28:52.445350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.307 [2024-07-24 19:28:52.445368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.307 [2024-07-24 19:28:52.445378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.307 [2024-07-24 19:28:52.445386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.307 [2024-07-24 19:28:52.445404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.307 qpair failed and we were unable to recover it.
00:28:06.307 [2024-07-24 19:28:52.455325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.307 [2024-07-24 19:28:52.455413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.307 [2024-07-24 19:28:52.455430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.307 [2024-07-24 19:28:52.455440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.307 [2024-07-24 19:28:52.455449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.307 [2024-07-24 19:28:52.455467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.307 qpair failed and we were unable to recover it.
00:28:06.307 [2024-07-24 19:28:52.465349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.307 [2024-07-24 19:28:52.465426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.307 [2024-07-24 19:28:52.465443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.307 [2024-07-24 19:28:52.465453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.307 [2024-07-24 19:28:52.465461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.307 [2024-07-24 19:28:52.465479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.307 qpair failed and we were unable to recover it.
00:28:06.307 [2024-07-24 19:28:52.475386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.307 [2024-07-24 19:28:52.475463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.307 [2024-07-24 19:28:52.475482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.307 [2024-07-24 19:28:52.475492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.307 [2024-07-24 19:28:52.475501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.307 [2024-07-24 19:28:52.475519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.307 qpair failed and we were unable to recover it.
00:28:06.307 [2024-07-24 19:28:52.485393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.307 [2024-07-24 19:28:52.485469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.307 [2024-07-24 19:28:52.485486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.307 [2024-07-24 19:28:52.485496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.307 [2024-07-24 19:28:52.485505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.307 [2024-07-24 19:28:52.485523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.307 qpair failed and we were unable to recover it.
00:28:06.307 [2024-07-24 19:28:52.495434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.307 [2024-07-24 19:28:52.495510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.307 [2024-07-24 19:28:52.495527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.307 [2024-07-24 19:28:52.495537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.307 [2024-07-24 19:28:52.495546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.307 [2024-07-24 19:28:52.495564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.307 qpair failed and we were unable to recover it.
00:28:06.307 [2024-07-24 19:28:52.505465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.307 [2024-07-24 19:28:52.505539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.307 [2024-07-24 19:28:52.505556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.307 [2024-07-24 19:28:52.505565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.307 [2024-07-24 19:28:52.505574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.307 [2024-07-24 19:28:52.505592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.307 qpair failed and we were unable to recover it.
00:28:06.307 [2024-07-24 19:28:52.515490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.307 [2024-07-24 19:28:52.515564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.307 [2024-07-24 19:28:52.515585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.307 [2024-07-24 19:28:52.515595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.307 [2024-07-24 19:28:52.515603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.307 [2024-07-24 19:28:52.515621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.307 qpair failed and we were unable to recover it.
00:28:06.307 [2024-07-24 19:28:52.525518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.307 [2024-07-24 19:28:52.525598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.307 [2024-07-24 19:28:52.525616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.307 [2024-07-24 19:28:52.525626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.307 [2024-07-24 19:28:52.525635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.307 [2024-07-24 19:28:52.525652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.307 qpair failed and we were unable to recover it.
00:28:06.307 [2024-07-24 19:28:52.535542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.307 [2024-07-24 19:28:52.535621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.308 [2024-07-24 19:28:52.535638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.308 [2024-07-24 19:28:52.535648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.308 [2024-07-24 19:28:52.535657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.308 [2024-07-24 19:28:52.535674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.308 qpair failed and we were unable to recover it.
00:28:06.568 [2024-07-24 19:28:52.545606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.568 [2024-07-24 19:28:52.545678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.569 [2024-07-24 19:28:52.545696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.569 [2024-07-24 19:28:52.545706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.569 [2024-07-24 19:28:52.545718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.569 [2024-07-24 19:28:52.545737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.569 qpair failed and we were unable to recover it.
00:28:06.569 [2024-07-24 19:28:52.555613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.569 [2024-07-24 19:28:52.555731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.569 [2024-07-24 19:28:52.555749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.569 [2024-07-24 19:28:52.555758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.569 [2024-07-24 19:28:52.555767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.569 [2024-07-24 19:28:52.555788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.569 qpair failed and we were unable to recover it.
00:28:06.569 [2024-07-24 19:28:52.565617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.569 [2024-07-24 19:28:52.565692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.569 [2024-07-24 19:28:52.565710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.570 [2024-07-24 19:28:52.565723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.570 [2024-07-24 19:28:52.565731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.570 [2024-07-24 19:28:52.565750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.570 qpair failed and we were unable to recover it.
00:28:06.570 [2024-07-24 19:28:52.575675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.570 [2024-07-24 19:28:52.575754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.570 [2024-07-24 19:28:52.575772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.570 [2024-07-24 19:28:52.575782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.570 [2024-07-24 19:28:52.575790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.570 [2024-07-24 19:28:52.575808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.570 qpair failed and we were unable to recover it.
00:28:06.570 [2024-07-24 19:28:52.585697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.570 [2024-07-24 19:28:52.585773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.570 [2024-07-24 19:28:52.585790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.570 [2024-07-24 19:28:52.585800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.570 [2024-07-24 19:28:52.585808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.570 [2024-07-24 19:28:52.585826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.570 qpair failed and we were unable to recover it.
00:28:06.570 [2024-07-24 19:28:52.595731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.570 [2024-07-24 19:28:52.595810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.571 [2024-07-24 19:28:52.595829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.571 [2024-07-24 19:28:52.595839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.571 [2024-07-24 19:28:52.595848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.571 [2024-07-24 19:28:52.595866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.571 qpair failed and we were unable to recover it.
00:28:06.571 [2024-07-24 19:28:52.605783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.571 [2024-07-24 19:28:52.605862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.571 [2024-07-24 19:28:52.605884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.571 [2024-07-24 19:28:52.605894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.571 [2024-07-24 19:28:52.605904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.571 [2024-07-24 19:28:52.605922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.571 qpair failed and we were unable to recover it.
00:28:06.571 [2024-07-24 19:28:52.615780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.571 [2024-07-24 19:28:52.615859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.571 [2024-07-24 19:28:52.615878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.572 [2024-07-24 19:28:52.615889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.572 [2024-07-24 19:28:52.615898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.572 [2024-07-24 19:28:52.615917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.572 qpair failed and we were unable to recover it.
00:28:06.572 [2024-07-24 19:28:52.625808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.572 [2024-07-24 19:28:52.625888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.572 [2024-07-24 19:28:52.625905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.572 [2024-07-24 19:28:52.625915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.572 [2024-07-24 19:28:52.625924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.572 [2024-07-24 19:28:52.625942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.572 qpair failed and we were unable to recover it.
00:28:06.572 [2024-07-24 19:28:52.635857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.572 [2024-07-24 19:28:52.635971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.572 [2024-07-24 19:28:52.635989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.572 [2024-07-24 19:28:52.636000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.572 [2024-07-24 19:28:52.636009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.572 [2024-07-24 19:28:52.636028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.572 qpair failed and we were unable to recover it.
00:28:06.572 [2024-07-24 19:28:52.645863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.572 [2024-07-24 19:28:52.646014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.572 [2024-07-24 19:28:52.646032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.572 [2024-07-24 19:28:52.646042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.572 [2024-07-24 19:28:52.646051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.572 [2024-07-24 19:28:52.646072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.572 qpair failed and we were unable to recover it.
00:28:06.572 [2024-07-24 19:28:52.655903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.572 [2024-07-24 19:28:52.655977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.572 [2024-07-24 19:28:52.655995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.572 [2024-07-24 19:28:52.656005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.572 [2024-07-24 19:28:52.656013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.572 [2024-07-24 19:28:52.656031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.572 qpair failed and we were unable to recover it.
00:28:06.572 [2024-07-24 19:28:52.665934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.572 [2024-07-24 19:28:52.666081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.572 [2024-07-24 19:28:52.666099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.572 [2024-07-24 19:28:52.666110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.572 [2024-07-24 19:28:52.666119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.572 [2024-07-24 19:28:52.666138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.572 qpair failed and we were unable to recover it.
00:28:06.572 [2024-07-24 19:28:52.675965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.572 [2024-07-24 19:28:52.676039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.572 [2024-07-24 19:28:52.676056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.572 [2024-07-24 19:28:52.676066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.572 [2024-07-24 19:28:52.676075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.572 [2024-07-24 19:28:52.676093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.573 qpair failed and we were unable to recover it.
00:28:06.573 [2024-07-24 19:28:52.685969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.573 [2024-07-24 19:28:52.686118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.573 [2024-07-24 19:28:52.686136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.573 [2024-07-24 19:28:52.686146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.573 [2024-07-24 19:28:52.686155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.573 [2024-07-24 19:28:52.686174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.573 qpair failed and we were unable to recover it.
00:28:06.573 [2024-07-24 19:28:52.696054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.573 [2024-07-24 19:28:52.696139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.573 [2024-07-24 19:28:52.696157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.573 [2024-07-24 19:28:52.696167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.573 [2024-07-24 19:28:52.696175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.573 [2024-07-24 19:28:52.696193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.573 qpair failed and we were unable to recover it.
00:28:06.573 [2024-07-24 19:28:52.706030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.573 [2024-07-24 19:28:52.706108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.573 [2024-07-24 19:28:52.706127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.573 [2024-07-24 19:28:52.706137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.573 [2024-07-24 19:28:52.706146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.574 [2024-07-24 19:28:52.706164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.574 qpair failed and we were unable to recover it.
00:28:06.574 [2024-07-24 19:28:52.716064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.574 [2024-07-24 19:28:52.716185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.574 [2024-07-24 19:28:52.716204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.574 [2024-07-24 19:28:52.716214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.574 [2024-07-24 19:28:52.716223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.574 [2024-07-24 19:28:52.716241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.574 qpair failed and we were unable to recover it.
00:28:06.574 [2024-07-24 19:28:52.726070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.574 [2024-07-24 19:28:52.726150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.574 [2024-07-24 19:28:52.726169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.574 [2024-07-24 19:28:52.726179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.574 [2024-07-24 19:28:52.726188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.574 [2024-07-24 19:28:52.726207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.574 qpair failed and we were unable to recover it.
00:28:06.575 [2024-07-24 19:28:52.736112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.575 [2024-07-24 19:28:52.736228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.575 [2024-07-24 19:28:52.736246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.575 [2024-07-24 19:28:52.736257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.575 [2024-07-24 19:28:52.736269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.575 [2024-07-24 19:28:52.736287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.575 qpair failed and we were unable to recover it.
00:28:06.575 [2024-07-24 19:28:52.746152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.575 [2024-07-24 19:28:52.746228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.575 [2024-07-24 19:28:52.746246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.575 [2024-07-24 19:28:52.746256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.575 [2024-07-24 19:28:52.746264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.575 [2024-07-24 19:28:52.746282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.575 qpair failed and we were unable to recover it.
00:28:06.575 [2024-07-24 19:28:52.756100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.575 [2024-07-24 19:28:52.756183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.576 [2024-07-24 19:28:52.756199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.576 [2024-07-24 19:28:52.756209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.576 [2024-07-24 19:28:52.756218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.576 [2024-07-24 19:28:52.756235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.576 qpair failed and we were unable to recover it.
00:28:06.576 [2024-07-24 19:28:52.766167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.576 [2024-07-24 19:28:52.766242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.576 [2024-07-24 19:28:52.766259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.576 [2024-07-24 19:28:52.766269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.576 [2024-07-24 19:28:52.766278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.576 [2024-07-24 19:28:52.766295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.576 qpair failed and we were unable to recover it.
00:28:06.576 [2024-07-24 19:28:52.776211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.576 [2024-07-24 19:28:52.776290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.576 [2024-07-24 19:28:52.776308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.576 [2024-07-24 19:28:52.776319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.576 [2024-07-24 19:28:52.776327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.577 [2024-07-24 19:28:52.776345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.577 qpair failed and we were unable to recover it.
00:28:06.577 [2024-07-24 19:28:52.786269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.577 [2024-07-24 19:28:52.786345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.577 [2024-07-24 19:28:52.786363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.577 [2024-07-24 19:28:52.786373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.577 [2024-07-24 19:28:52.786381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.577 [2024-07-24 19:28:52.786399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.577 qpair failed and we were unable to recover it.
00:28:06.577 [2024-07-24 19:28:52.796288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.577 [2024-07-24 19:28:52.796359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.577 [2024-07-24 19:28:52.796377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.577 [2024-07-24 19:28:52.796387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.577 [2024-07-24 19:28:52.796396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.577 [2024-07-24 19:28:52.796413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.577 qpair failed and we were unable to recover it.
00:28:06.577 [2024-07-24 19:28:52.806293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.577 [2024-07-24 19:28:52.806371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.577 [2024-07-24 19:28:52.806389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.578 [2024-07-24 19:28:52.806399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.578 [2024-07-24 19:28:52.806407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.578 [2024-07-24 19:28:52.806425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.578 qpair failed and we were unable to recover it.
00:28:06.843 [2024-07-24 19:28:52.816278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.843 [2024-07-24 19:28:52.816353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.843 [2024-07-24 19:28:52.816373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.843 [2024-07-24 19:28:52.816384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.843 [2024-07-24 19:28:52.816393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.843 [2024-07-24 19:28:52.816412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.843 qpair failed and we were unable to recover it.
00:28:06.843 [2024-07-24 19:28:52.826313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.843 [2024-07-24 19:28:52.826385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.843 [2024-07-24 19:28:52.826402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.843 [2024-07-24 19:28:52.826416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.843 [2024-07-24 19:28:52.826425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.843 [2024-07-24 19:28:52.826442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.843 qpair failed and we were unable to recover it.
00:28:06.843 [2024-07-24 19:28:52.836400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.843 [2024-07-24 19:28:52.836483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.843 [2024-07-24 19:28:52.836501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.843 [2024-07-24 19:28:52.836511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.843 [2024-07-24 19:28:52.836519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.843 [2024-07-24 19:28:52.836537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.843 qpair failed and we were unable to recover it.
00:28:06.843 [2024-07-24 19:28:52.846386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.843 [2024-07-24 19:28:52.846468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.843 [2024-07-24 19:28:52.846485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.843 [2024-07-24 19:28:52.846495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.843 [2024-07-24 19:28:52.846504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.843 [2024-07-24 19:28:52.846522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.843 qpair failed and we were unable to recover it.
00:28:06.843 [2024-07-24 19:28:52.856463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.843 [2024-07-24 19:28:52.856543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.843 [2024-07-24 19:28:52.856561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.843 [2024-07-24 19:28:52.856571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.843 [2024-07-24 19:28:52.856580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.843 [2024-07-24 19:28:52.856598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.843 qpair failed and we were unable to recover it.
00:28:06.843 [2024-07-24 19:28:52.866455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.843 [2024-07-24 19:28:52.866580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.843 [2024-07-24 19:28:52.866599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.843 [2024-07-24 19:28:52.866609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.843 [2024-07-24 19:28:52.866618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.843 [2024-07-24 19:28:52.866637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.843 qpair failed and we were unable to recover it.
00:28:06.843 [2024-07-24 19:28:52.876442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.843 [2024-07-24 19:28:52.876543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.843 [2024-07-24 19:28:52.876561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.843 [2024-07-24 19:28:52.876572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.843 [2024-07-24 19:28:52.876581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.843 [2024-07-24 19:28:52.876599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.843 qpair failed and we were unable to recover it.
00:28:06.843 [2024-07-24 19:28:52.886470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.843 [2024-07-24 19:28:52.886548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.843 [2024-07-24 19:28:52.886565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.843 [2024-07-24 19:28:52.886575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.843 [2024-07-24 19:28:52.886584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.843 [2024-07-24 19:28:52.886602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.843 qpair failed and we were unable to recover it.
00:28:06.844 [2024-07-24 19:28:52.896616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.844 [2024-07-24 19:28:52.896729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.844 [2024-07-24 19:28:52.896747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.844 [2024-07-24 19:28:52.896757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.844 [2024-07-24 19:28:52.896766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.844 [2024-07-24 19:28:52.896784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.844 qpair failed and we were unable to recover it.
00:28:06.844 [2024-07-24 19:28:52.906595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.844 [2024-07-24 19:28:52.906673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.844 [2024-07-24 19:28:52.906690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.844 [2024-07-24 19:28:52.906700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.844 [2024-07-24 19:28:52.906709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.844 [2024-07-24 19:28:52.906741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.844 qpair failed and we were unable to recover it.
00:28:06.844 [2024-07-24 19:28:52.916625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.844 [2024-07-24 19:28:52.916705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.844 [2024-07-24 19:28:52.916727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.844 [2024-07-24 19:28:52.916740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.844 [2024-07-24 19:28:52.916749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.844 [2024-07-24 19:28:52.916767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.844 qpair failed and we were unable to recover it.
00:28:06.844 [2024-07-24 19:28:52.926651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.844 [2024-07-24 19:28:52.926735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.844 [2024-07-24 19:28:52.926753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.844 [2024-07-24 19:28:52.926763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.844 [2024-07-24 19:28:52.926771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.844 [2024-07-24 19:28:52.926790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.844 qpair failed and we were unable to recover it.
00:28:06.844 [2024-07-24 19:28:52.936655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.844 [2024-07-24 19:28:52.936739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.844 [2024-07-24 19:28:52.936757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.844 [2024-07-24 19:28:52.936767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.844 [2024-07-24 19:28:52.936776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.844 [2024-07-24 19:28:52.936795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.844 qpair failed and we were unable to recover it.
00:28:06.844 [2024-07-24 19:28:52.946644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.844 [2024-07-24 19:28:52.946725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.844 [2024-07-24 19:28:52.946742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.844 [2024-07-24 19:28:52.946753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.844 [2024-07-24 19:28:52.946761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.844 [2024-07-24 19:28:52.946779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.844 qpair failed and we were unable to recover it.
00:28:06.844 [2024-07-24 19:28:52.956734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.844 [2024-07-24 19:28:52.956819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.844 [2024-07-24 19:28:52.956837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.844 [2024-07-24 19:28:52.956847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.844 [2024-07-24 19:28:52.956856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.844 [2024-07-24 19:28:52.956875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.844 qpair failed and we were unable to recover it.
00:28:06.844 [2024-07-24 19:28:52.966720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.844 [2024-07-24 19:28:52.966840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.844 [2024-07-24 19:28:52.966859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.844 [2024-07-24 19:28:52.966868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.844 [2024-07-24 19:28:52.966878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.844 [2024-07-24 19:28:52.966896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.844 qpair failed and we were unable to recover it.
00:28:06.844 [2024-07-24 19:28:52.976806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.844 [2024-07-24 19:28:52.976882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.844 [2024-07-24 19:28:52.976899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.844 [2024-07-24 19:28:52.976910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.844 [2024-07-24 19:28:52.976918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.844 [2024-07-24 19:28:52.976936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.844 qpair failed and we were unable to recover it.
00:28:06.844 [2024-07-24 19:28:52.986828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.844 [2024-07-24 19:28:52.986906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.844 [2024-07-24 19:28:52.986923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.844 [2024-07-24 19:28:52.986933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.844 [2024-07-24 19:28:52.986942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.844 [2024-07-24 19:28:52.986960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.844 qpair failed and we were unable to recover it.
00:28:06.844 [2024-07-24 19:28:52.996788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.844 [2024-07-24 19:28:52.996862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.844 [2024-07-24 19:28:52.996880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.844 [2024-07-24 19:28:52.996890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.844 [2024-07-24 19:28:52.996899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.844 [2024-07-24 19:28:52.996917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.844 qpair failed and we were unable to recover it.
00:28:06.844 [2024-07-24 19:28:53.006963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.844 [2024-07-24 19:28:53.007040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.844 [2024-07-24 19:28:53.007060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.844 [2024-07-24 19:28:53.007070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.844 [2024-07-24 19:28:53.007079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.844 [2024-07-24 19:28:53.007097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.844 qpair failed and we were unable to recover it.
00:28:06.844 [2024-07-24 19:28:53.016936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.845 [2024-07-24 19:28:53.017026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.845 [2024-07-24 19:28:53.017043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.845 [2024-07-24 19:28:53.017053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.845 [2024-07-24 19:28:53.017062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.845 [2024-07-24 19:28:53.017080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.845 qpair failed and we were unable to recover it.
00:28:06.845 [2024-07-24 19:28:53.026964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.845 [2024-07-24 19:28:53.027080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.845 [2024-07-24 19:28:53.027099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.845 [2024-07-24 19:28:53.027109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.845 [2024-07-24 19:28:53.027118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.845 [2024-07-24 19:28:53.027136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.845 qpair failed and we were unable to recover it.
00:28:06.845 [2024-07-24 19:28:53.036974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.845 [2024-07-24 19:28:53.037051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.845 [2024-07-24 19:28:53.037068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.845 [2024-07-24 19:28:53.037078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.845 [2024-07-24 19:28:53.037087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.845 [2024-07-24 19:28:53.037105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.845 qpair failed and we were unable to recover it.
00:28:06.845 [2024-07-24 19:28:53.046974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.845 [2024-07-24 19:28:53.047051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.845 [2024-07-24 19:28:53.047069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.845 [2024-07-24 19:28:53.047079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.845 [2024-07-24 19:28:53.047087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.845 [2024-07-24 19:28:53.047108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.845 qpair failed and we were unable to recover it.
00:28:06.845 [2024-07-24 19:28:53.057035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.845 [2024-07-24 19:28:53.057112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.845 [2024-07-24 19:28:53.057129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.845 [2024-07-24 19:28:53.057139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.845 [2024-07-24 19:28:53.057148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.845 [2024-07-24 19:28:53.057165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.845 qpair failed and we were unable to recover it.
00:28:06.845 [2024-07-24 19:28:53.067041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.845 [2024-07-24 19:28:53.067184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.845 [2024-07-24 19:28:53.067202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.845 [2024-07-24 19:28:53.067213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.845 [2024-07-24 19:28:53.067221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.845 [2024-07-24 19:28:53.067239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.845 qpair failed and we were unable to recover it.
00:28:06.845 [2024-07-24 19:28:53.077094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.845 [2024-07-24 19:28:53.077168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.845 [2024-07-24 19:28:53.077185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.845 [2024-07-24 19:28:53.077195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.845 [2024-07-24 19:28:53.077204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:06.845 [2024-07-24 19:28:53.077221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.845 qpair failed and we were unable to recover it.
00:28:07.106 [2024-07-24 19:28:53.087083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.106 [2024-07-24 19:28:53.087160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.106 [2024-07-24 19:28:53.087178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.106 [2024-07-24 19:28:53.087188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.106 [2024-07-24 19:28:53.087197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.106 [2024-07-24 19:28:53.087215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.106 qpair failed and we were unable to recover it.
00:28:07.106 [2024-07-24 19:28:53.097134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.106 [2024-07-24 19:28:53.097209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.106 [2024-07-24 19:28:53.097230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.106 [2024-07-24 19:28:53.097240] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.106 [2024-07-24 19:28:53.097249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.106 [2024-07-24 19:28:53.097266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.106 qpair failed and we were unable to recover it.
00:28:07.106 [2024-07-24 19:28:53.107167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.106 [2024-07-24 19:28:53.107245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.106 [2024-07-24 19:28:53.107264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.106 [2024-07-24 19:28:53.107275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.106 [2024-07-24 19:28:53.107284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.106 [2024-07-24 19:28:53.107303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.106 qpair failed and we were unable to recover it.
00:28:07.106 [2024-07-24 19:28:53.117204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.106 [2024-07-24 19:28:53.117274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.106 [2024-07-24 19:28:53.117292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.106 [2024-07-24 19:28:53.117302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.106 [2024-07-24 19:28:53.117311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.106 [2024-07-24 19:28:53.117329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.106 qpair failed and we were unable to recover it.
00:28:07.106 [2024-07-24 19:28:53.127246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.106 [2024-07-24 19:28:53.127319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.106 [2024-07-24 19:28:53.127336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.106 [2024-07-24 19:28:53.127346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.106 [2024-07-24 19:28:53.127355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.106 [2024-07-24 19:28:53.127373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.106 qpair failed and we were unable to recover it.
00:28:07.106 [2024-07-24 19:28:53.137252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.106 [2024-07-24 19:28:53.137330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.106 [2024-07-24 19:28:53.137347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.106 [2024-07-24 19:28:53.137357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.106 [2024-07-24 19:28:53.137368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.106 [2024-07-24 19:28:53.137386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.106 qpair failed and we were unable to recover it.
00:28:07.106 [2024-07-24 19:28:53.147225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.106 [2024-07-24 19:28:53.147303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.106 [2024-07-24 19:28:53.147321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.106 [2024-07-24 19:28:53.147331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.106 [2024-07-24 19:28:53.147339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.106 [2024-07-24 19:28:53.147357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.106 qpair failed and we were unable to recover it.
00:28:07.106 [2024-07-24 19:28:53.157300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.106 [2024-07-24 19:28:53.157379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.106 [2024-07-24 19:28:53.157398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.107 [2024-07-24 19:28:53.157409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.107 [2024-07-24 19:28:53.157418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.107 [2024-07-24 19:28:53.157437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.107 qpair failed and we were unable to recover it.
00:28:07.107 [2024-07-24 19:28:53.167268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.107 [2024-07-24 19:28:53.167343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.107 [2024-07-24 19:28:53.167360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.107 [2024-07-24 19:28:53.167370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.107 [2024-07-24 19:28:53.167379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.107 [2024-07-24 19:28:53.167397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.107 qpair failed and we were unable to recover it.
00:28:07.107 [2024-07-24 19:28:53.177367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.107 [2024-07-24 19:28:53.177488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.107 [2024-07-24 19:28:53.177506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.107 [2024-07-24 19:28:53.177516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.107 [2024-07-24 19:28:53.177526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.107 [2024-07-24 19:28:53.177543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.107 qpair failed and we were unable to recover it.
00:28:07.107 [2024-07-24 19:28:53.187328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.107 [2024-07-24 19:28:53.187409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.107 [2024-07-24 19:28:53.187428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.107 [2024-07-24 19:28:53.187438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.107 [2024-07-24 19:28:53.187446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.107 [2024-07-24 19:28:53.187464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.107 qpair failed and we were unable to recover it.
00:28:07.107 [2024-07-24 19:28:53.197354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.107 [2024-07-24 19:28:53.197427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.107 [2024-07-24 19:28:53.197444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.107 [2024-07-24 19:28:53.197454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.107 [2024-07-24 19:28:53.197463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.107 [2024-07-24 19:28:53.197481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.107 qpair failed and we were unable to recover it.
00:28:07.107 [2024-07-24 19:28:53.207537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.107 [2024-07-24 19:28:53.207610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.107 [2024-07-24 19:28:53.207628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.107 [2024-07-24 19:28:53.207638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.107 [2024-07-24 19:28:53.207647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.107 [2024-07-24 19:28:53.207665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.107 qpair failed and we were unable to recover it.
00:28:07.107 [2024-07-24 19:28:53.217513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.107 [2024-07-24 19:28:53.217586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.107 [2024-07-24 19:28:53.217604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.107 [2024-07-24 19:28:53.217614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.107 [2024-07-24 19:28:53.217623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.107 [2024-07-24 19:28:53.217641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.107 qpair failed and we were unable to recover it.
00:28:07.107 [2024-07-24 19:28:53.227509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.107 [2024-07-24 19:28:53.227580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.107 [2024-07-24 19:28:53.227598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.107 [2024-07-24 19:28:53.227613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.107 [2024-07-24 19:28:53.227622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.107 [2024-07-24 19:28:53.227640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.107 qpair failed and we were unable to recover it.
00:28:07.107 [2024-07-24 19:28:53.237534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.107 [2024-07-24 19:28:53.237605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.107 [2024-07-24 19:28:53.237622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.107 [2024-07-24 19:28:53.237631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.107 [2024-07-24 19:28:53.237640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.107 [2024-07-24 19:28:53.237658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.107 qpair failed and we were unable to recover it.
00:28:07.107 [2024-07-24 19:28:53.247506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.107 [2024-07-24 19:28:53.247578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.107 [2024-07-24 19:28:53.247596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.107 [2024-07-24 19:28:53.247605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.107 [2024-07-24 19:28:53.247614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.107 [2024-07-24 19:28:53.247632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.107 qpair failed and we were unable to recover it.
00:28:07.107 [2024-07-24 19:28:53.257531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.107 [2024-07-24 19:28:53.257607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.107 [2024-07-24 19:28:53.257624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.107 [2024-07-24 19:28:53.257635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.107 [2024-07-24 19:28:53.257643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.107 [2024-07-24 19:28:53.257661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.107 qpair failed and we were unable to recover it.
00:28:07.107 [2024-07-24 19:28:53.267576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.107 [2024-07-24 19:28:53.267665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.107 [2024-07-24 19:28:53.267681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.107 [2024-07-24 19:28:53.267691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.107 [2024-07-24 19:28:53.267700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.107 [2024-07-24 19:28:53.267722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.107 qpair failed and we were unable to recover it.
00:28:07.107 [2024-07-24 19:28:53.277636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.107 [2024-07-24 19:28:53.277726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.107 [2024-07-24 19:28:53.277746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.107 [2024-07-24 19:28:53.277757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.107 [2024-07-24 19:28:53.277766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.107 [2024-07-24 19:28:53.277784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.108 qpair failed and we were unable to recover it.
00:28:07.108 [2024-07-24 19:28:53.287672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.108 [2024-07-24 19:28:53.287754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.108 [2024-07-24 19:28:53.287772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.108 [2024-07-24 19:28:53.287782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.108 [2024-07-24 19:28:53.287791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.108 [2024-07-24 19:28:53.287809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.108 qpair failed and we were unable to recover it.
00:28:07.108 [2024-07-24 19:28:53.297665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.108 [2024-07-24 19:28:53.297750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.108 [2024-07-24 19:28:53.297768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.108 [2024-07-24 19:28:53.297778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.108 [2024-07-24 19:28:53.297787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.108 [2024-07-24 19:28:53.297805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.108 qpair failed and we were unable to recover it.
00:28:07.108 [2024-07-24 19:28:53.307675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.108 [2024-07-24 19:28:53.307766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.108 [2024-07-24 19:28:53.307784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.108 [2024-07-24 19:28:53.307794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.108 [2024-07-24 19:28:53.307803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.108 [2024-07-24 19:28:53.307822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.108 qpair failed and we were unable to recover it.
00:28:07.108 [2024-07-24 19:28:53.317719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.108 [2024-07-24 19:28:53.317812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.108 [2024-07-24 19:28:53.317829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.108 [2024-07-24 19:28:53.317842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.108 [2024-07-24 19:28:53.317850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.108 [2024-07-24 19:28:53.317869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.108 qpair failed and we were unable to recover it.
00:28:07.108 [2024-07-24 19:28:53.327833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.108 [2024-07-24 19:28:53.327916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.108 [2024-07-24 19:28:53.327933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.108 [2024-07-24 19:28:53.327943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.108 [2024-07-24 19:28:53.327951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.108 [2024-07-24 19:28:53.327969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.108 qpair failed and we were unable to recover it.
00:28:07.108 [2024-07-24 19:28:53.337791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.108 [2024-07-24 19:28:53.337864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.108 [2024-07-24 19:28:53.337881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.108 [2024-07-24 19:28:53.337891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.108 [2024-07-24 19:28:53.337899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.108 [2024-07-24 19:28:53.337918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.108 qpair failed and we were unable to recover it.
00:28:07.368 [2024-07-24 19:28:53.347806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.368 [2024-07-24 19:28:53.347961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.368 [2024-07-24 19:28:53.347980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.368 [2024-07-24 19:28:53.347990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.368 [2024-07-24 19:28:53.347999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.368 [2024-07-24 19:28:53.348018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.368 qpair failed and we were unable to recover it.
00:28:07.368 [2024-07-24 19:28:53.357930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.368 [2024-07-24 19:28:53.358043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.368 [2024-07-24 19:28:53.358069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.368 [2024-07-24 19:28:53.358080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.368 [2024-07-24 19:28:53.358089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.368 [2024-07-24 19:28:53.358107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.368 qpair failed and we were unable to recover it.
00:28:07.368 [2024-07-24 19:28:53.367932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.368 [2024-07-24 19:28:53.368008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.368 [2024-07-24 19:28:53.368025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.368 [2024-07-24 19:28:53.368035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.368 [2024-07-24 19:28:53.368044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.368 [2024-07-24 19:28:53.368062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.368 qpair failed and we were unable to recover it.
00:28:07.368 [2024-07-24 19:28:53.377886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.368 [2024-07-24 19:28:53.377975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.368 [2024-07-24 19:28:53.377992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.368 [2024-07-24 19:28:53.378002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.368 [2024-07-24 19:28:53.378010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.368 [2024-07-24 19:28:53.378028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.368 qpair failed and we were unable to recover it.
00:28:07.368 [2024-07-24 19:28:53.388004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.368 [2024-07-24 19:28:53.388108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.368 [2024-07-24 19:28:53.388125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.368 [2024-07-24 19:28:53.388135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.368 [2024-07-24 19:28:53.388144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.368 [2024-07-24 19:28:53.388162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.368 qpair failed and we were unable to recover it.
00:28:07.368 [2024-07-24 19:28:53.398014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.368 [2024-07-24 19:28:53.398087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.368 [2024-07-24 19:28:53.398105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.368 [2024-07-24 19:28:53.398115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.368 [2024-07-24 19:28:53.398124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.368 [2024-07-24 19:28:53.398141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.368 qpair failed and we were unable to recover it. 00:28:07.368 [2024-07-24 19:28:53.408018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.368 [2024-07-24 19:28:53.408093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.368 [2024-07-24 19:28:53.408114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.368 [2024-07-24 19:28:53.408124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.368 [2024-07-24 19:28:53.408132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.368 [2024-07-24 19:28:53.408151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.368 qpair failed and we were unable to recover it. 00:28:07.368 [2024-07-24 19:28:53.418067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.368 [2024-07-24 19:28:53.418141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.368 [2024-07-24 19:28:53.418160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.368 [2024-07-24 19:28:53.418169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.368 [2024-07-24 19:28:53.418179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.368 [2024-07-24 19:28:53.418197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.368 qpair failed and we were unable to recover it. 
00:28:07.368 [2024-07-24 19:28:53.428071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.368 [2024-07-24 19:28:53.428155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.368 [2024-07-24 19:28:53.428172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.368 [2024-07-24 19:28:53.428182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.368 [2024-07-24 19:28:53.428191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.368 [2024-07-24 19:28:53.428210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.368 qpair failed and we were unable to recover it. 00:28:07.369 [2024-07-24 19:28:53.438126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.369 [2024-07-24 19:28:53.438214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.369 [2024-07-24 19:28:53.438231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.369 [2024-07-24 19:28:53.438241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.369 [2024-07-24 19:28:53.438250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.369 [2024-07-24 19:28:53.438268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.369 qpair failed and we were unable to recover it. 00:28:07.369 [2024-07-24 19:28:53.448119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.369 [2024-07-24 19:28:53.448193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.369 [2024-07-24 19:28:53.448210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.369 [2024-07-24 19:28:53.448220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.369 [2024-07-24 19:28:53.448229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.369 [2024-07-24 19:28:53.448250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.369 qpair failed and we were unable to recover it. 
00:28:07.369 [2024-07-24 19:28:53.458078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.369 [2024-07-24 19:28:53.458156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.369 [2024-07-24 19:28:53.458173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.369 [2024-07-24 19:28:53.458183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.369 [2024-07-24 19:28:53.458192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.369 [2024-07-24 19:28:53.458210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.369 qpair failed and we were unable to recover it. 00:28:07.369 [2024-07-24 19:28:53.468186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.369 [2024-07-24 19:28:53.468264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.369 [2024-07-24 19:28:53.468281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.369 [2024-07-24 19:28:53.468291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.369 [2024-07-24 19:28:53.468299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.369 [2024-07-24 19:28:53.468317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.369 qpair failed and we were unable to recover it. 00:28:07.369 [2024-07-24 19:28:53.478253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.369 [2024-07-24 19:28:53.478331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.369 [2024-07-24 19:28:53.478348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.369 [2024-07-24 19:28:53.478358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.369 [2024-07-24 19:28:53.478366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.369 [2024-07-24 19:28:53.478384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.369 qpair failed and we were unable to recover it. 
00:28:07.369 [2024-07-24 19:28:53.488236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.369 [2024-07-24 19:28:53.488312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.369 [2024-07-24 19:28:53.488329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.369 [2024-07-24 19:28:53.488339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.369 [2024-07-24 19:28:53.488347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.369 [2024-07-24 19:28:53.488366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.369 qpair failed and we were unable to recover it. 00:28:07.369 [2024-07-24 19:28:53.498287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.369 [2024-07-24 19:28:53.498365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.369 [2024-07-24 19:28:53.498385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.369 [2024-07-24 19:28:53.498395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.369 [2024-07-24 19:28:53.498403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.369 [2024-07-24 19:28:53.498421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.369 qpair failed and we were unable to recover it. 00:28:07.369 [2024-07-24 19:28:53.508307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.369 [2024-07-24 19:28:53.508382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.369 [2024-07-24 19:28:53.508399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.369 [2024-07-24 19:28:53.508409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.369 [2024-07-24 19:28:53.508418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.369 [2024-07-24 19:28:53.508435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.369 qpair failed and we were unable to recover it. 
00:28:07.369 [2024-07-24 19:28:53.518382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.369 [2024-07-24 19:28:53.518458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.369 [2024-07-24 19:28:53.518475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.369 [2024-07-24 19:28:53.518485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.369 [2024-07-24 19:28:53.518494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.369 [2024-07-24 19:28:53.518511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.369 qpair failed and we were unable to recover it. 00:28:07.369 [2024-07-24 19:28:53.528362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.369 [2024-07-24 19:28:53.528436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.369 [2024-07-24 19:28:53.528454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.369 [2024-07-24 19:28:53.528463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.369 [2024-07-24 19:28:53.528472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.369 [2024-07-24 19:28:53.528490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.369 qpair failed and we were unable to recover it. 00:28:07.369 [2024-07-24 19:28:53.538407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.369 [2024-07-24 19:28:53.538533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.369 [2024-07-24 19:28:53.538551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.369 [2024-07-24 19:28:53.538561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.369 [2024-07-24 19:28:53.538573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.369 [2024-07-24 19:28:53.538591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.369 qpair failed and we were unable to recover it. 
00:28:07.369 [2024-07-24 19:28:53.548452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.369 [2024-07-24 19:28:53.548542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.369 [2024-07-24 19:28:53.548560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.369 [2024-07-24 19:28:53.548569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.369 [2024-07-24 19:28:53.548578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.369 [2024-07-24 19:28:53.548596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.369 qpair failed and we were unable to recover it. 00:28:07.369 [2024-07-24 19:28:53.558458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.369 [2024-07-24 19:28:53.558538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.370 [2024-07-24 19:28:53.558555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.370 [2024-07-24 19:28:53.558565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.370 [2024-07-24 19:28:53.558574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.370 [2024-07-24 19:28:53.558591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.370 qpair failed and we were unable to recover it. 00:28:07.370 [2024-07-24 19:28:53.568470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.370 [2024-07-24 19:28:53.568558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.370 [2024-07-24 19:28:53.568575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.370 [2024-07-24 19:28:53.568584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.370 [2024-07-24 19:28:53.568593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.370 [2024-07-24 19:28:53.568611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.370 qpair failed and we were unable to recover it. 
00:28:07.370 [2024-07-24 19:28:53.578513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.370 [2024-07-24 19:28:53.578589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.370 [2024-07-24 19:28:53.578606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.370 [2024-07-24 19:28:53.578616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.370 [2024-07-24 19:28:53.578625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.370 [2024-07-24 19:28:53.578643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.370 qpair failed and we were unable to recover it. 00:28:07.370 [2024-07-24 19:28:53.588545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.370 [2024-07-24 19:28:53.588624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.370 [2024-07-24 19:28:53.588641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.370 [2024-07-24 19:28:53.588650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.370 [2024-07-24 19:28:53.588659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.370 [2024-07-24 19:28:53.588677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.370 qpair failed and we were unable to recover it. 00:28:07.370 [2024-07-24 19:28:53.598599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.370 [2024-07-24 19:28:53.598686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.370 [2024-07-24 19:28:53.598703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.370 [2024-07-24 19:28:53.598713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.370 [2024-07-24 19:28:53.598726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.370 [2024-07-24 19:28:53.598744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.370 qpair failed and we were unable to recover it. 
00:28:07.631 [2024-07-24 19:28:53.608578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.631 [2024-07-24 19:28:53.608655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.631 [2024-07-24 19:28:53.608673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.631 [2024-07-24 19:28:53.608684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.631 [2024-07-24 19:28:53.608693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.631 [2024-07-24 19:28:53.608711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.631 qpair failed and we were unable to recover it. 00:28:07.631 [2024-07-24 19:28:53.618652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.631 [2024-07-24 19:28:53.618736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.631 [2024-07-24 19:28:53.618754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.631 [2024-07-24 19:28:53.618764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.631 [2024-07-24 19:28:53.618772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.631 [2024-07-24 19:28:53.618790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.631 qpair failed and we were unable to recover it. 00:28:07.631 [2024-07-24 19:28:53.628635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.631 [2024-07-24 19:28:53.628706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.631 [2024-07-24 19:28:53.628730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.631 [2024-07-24 19:28:53.628741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.631 [2024-07-24 19:28:53.628753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.631 [2024-07-24 19:28:53.628772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.631 qpair failed and we were unable to recover it. 
00:28:07.631 [2024-07-24 19:28:53.638685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.631 [2024-07-24 19:28:53.638800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.631 [2024-07-24 19:28:53.638819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.631 [2024-07-24 19:28:53.638829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.631 [2024-07-24 19:28:53.638838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.631 [2024-07-24 19:28:53.638857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.631 qpair failed and we were unable to recover it. 00:28:07.631 [2024-07-24 19:28:53.648675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.631 [2024-07-24 19:28:53.648822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.631 [2024-07-24 19:28:53.648840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.631 [2024-07-24 19:28:53.648850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.631 [2024-07-24 19:28:53.648859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.631 [2024-07-24 19:28:53.648878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.631 qpair failed and we were unable to recover it. 00:28:07.631 [2024-07-24 19:28:53.658722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.631 [2024-07-24 19:28:53.658798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.631 [2024-07-24 19:28:53.658816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.631 [2024-07-24 19:28:53.658825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.631 [2024-07-24 19:28:53.658834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.631 [2024-07-24 19:28:53.658852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.631 qpair failed and we were unable to recover it. 
00:28:07.631 [2024-07-24 19:28:53.668762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.631 [2024-07-24 19:28:53.668838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.631 [2024-07-24 19:28:53.668855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.631 [2024-07-24 19:28:53.668865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.631 [2024-07-24 19:28:53.668874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.631 [2024-07-24 19:28:53.668892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.631 qpair failed and we were unable to recover it. 00:28:07.631 [2024-07-24 19:28:53.678848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.631 [2024-07-24 19:28:53.678924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.631 [2024-07-24 19:28:53.678941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.631 [2024-07-24 19:28:53.678951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.631 [2024-07-24 19:28:53.678960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.631 [2024-07-24 19:28:53.678979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.631 qpair failed and we were unable to recover it. 00:28:07.631 [2024-07-24 19:28:53.688792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.631 [2024-07-24 19:28:53.688867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.631 [2024-07-24 19:28:53.688884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.631 [2024-07-24 19:28:53.688893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.631 [2024-07-24 19:28:53.688902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.631 [2024-07-24 19:28:53.688920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.631 qpair failed and we were unable to recover it. 
00:28:07.631 [2024-07-24 19:28:53.698835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.632 [2024-07-24 19:28:53.698917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.632 [2024-07-24 19:28:53.698934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.632 [2024-07-24 19:28:53.698944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.632 [2024-07-24 19:28:53.698952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.632 [2024-07-24 19:28:53.698970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.632 qpair failed and we were unable to recover it. 00:28:07.632 [2024-07-24 19:28:53.708893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.632 [2024-07-24 19:28:53.708994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.632 [2024-07-24 19:28:53.709012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.632 [2024-07-24 19:28:53.709021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.632 [2024-07-24 19:28:53.709030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.632 [2024-07-24 19:28:53.709048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.632 qpair failed and we were unable to recover it. 00:28:07.632 [2024-07-24 19:28:53.718918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.632 [2024-07-24 19:28:53.719030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.632 [2024-07-24 19:28:53.719048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.632 [2024-07-24 19:28:53.719061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.632 [2024-07-24 19:28:53.719070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.632 [2024-07-24 19:28:53.719089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.632 qpair failed and we were unable to recover it. 
00:28:07.632 [2024-07-24 19:28:53.728952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.632 [2024-07-24 19:28:53.729037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.632 [2024-07-24 19:28:53.729054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.632 [2024-07-24 19:28:53.729064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.632 [2024-07-24 19:28:53.729073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.632 [2024-07-24 19:28:53.729091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.632 qpair failed and we were unable to recover it. 00:28:07.632 [2024-07-24 19:28:53.738922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.632 [2024-07-24 19:28:53.738992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.632 [2024-07-24 19:28:53.739009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.632 [2024-07-24 19:28:53.739018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.632 [2024-07-24 19:28:53.739027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.632 [2024-07-24 19:28:53.739045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.632 qpair failed and we were unable to recover it. 00:28:07.632 [2024-07-24 19:28:53.748979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.632 [2024-07-24 19:28:53.749055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.632 [2024-07-24 19:28:53.749072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.632 [2024-07-24 19:28:53.749081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.632 [2024-07-24 19:28:53.749090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.632 [2024-07-24 19:28:53.749108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.632 qpair failed and we were unable to recover it. 
00:28:07.632 [2024-07-24 19:28:53.759010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.632 [2024-07-24 19:28:53.759085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.632 [2024-07-24 19:28:53.759102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.632 [2024-07-24 19:28:53.759112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.632 [2024-07-24 19:28:53.759120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.632 [2024-07-24 19:28:53.759138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.632 qpair failed and we were unable to recover it. 00:28:07.632 [2024-07-24 19:28:53.769014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.632 [2024-07-24 19:28:53.769138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.632 [2024-07-24 19:28:53.769156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.632 [2024-07-24 19:28:53.769166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.632 [2024-07-24 19:28:53.769175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.632 [2024-07-24 19:28:53.769193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.632 qpair failed and we were unable to recover it. 00:28:07.632 [2024-07-24 19:28:53.779065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.632 [2024-07-24 19:28:53.779141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.632 [2024-07-24 19:28:53.779158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.632 [2024-07-24 19:28:53.779168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.632 [2024-07-24 19:28:53.779176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.632 [2024-07-24 19:28:53.779193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.632 qpair failed and we were unable to recover it. 
00:28:07.632 [2024-07-24 19:28:53.789100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.632 [2024-07-24 19:28:53.789177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.632 [2024-07-24 19:28:53.789196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.632 [2024-07-24 19:28:53.789206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.632 [2024-07-24 19:28:53.789215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.632 [2024-07-24 19:28:53.789233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.632 qpair failed and we were unable to recover it. 00:28:07.632 [2024-07-24 19:28:53.799110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.632 [2024-07-24 19:28:53.799187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.632 [2024-07-24 19:28:53.799205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.632 [2024-07-24 19:28:53.799215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.632 [2024-07-24 19:28:53.799224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.632 [2024-07-24 19:28:53.799242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.632 qpair failed and we were unable to recover it. 00:28:07.632 [2024-07-24 19:28:53.809066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.632 [2024-07-24 19:28:53.809143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.632 [2024-07-24 19:28:53.809163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.632 [2024-07-24 19:28:53.809173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.632 [2024-07-24 19:28:53.809182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.632 [2024-07-24 19:28:53.809199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.632 qpair failed and we were unable to recover it. 
00:28:07.632 [2024-07-24 19:28:53.819204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.632 [2024-07-24 19:28:53.819303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.632 [2024-07-24 19:28:53.819321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.633 [2024-07-24 19:28:53.819330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.633 [2024-07-24 19:28:53.819339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.633 [2024-07-24 19:28:53.819357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.633 qpair failed and we were unable to recover it. 00:28:07.633 [2024-07-24 19:28:53.829203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.633 [2024-07-24 19:28:53.829282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.633 [2024-07-24 19:28:53.829299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.633 [2024-07-24 19:28:53.829309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.633 [2024-07-24 19:28:53.829317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.633 [2024-07-24 19:28:53.829335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.633 qpair failed and we were unable to recover it. 00:28:07.633 [2024-07-24 19:28:53.839250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.633 [2024-07-24 19:28:53.839344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.633 [2024-07-24 19:28:53.839362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.633 [2024-07-24 19:28:53.839372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.633 [2024-07-24 19:28:53.839381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.633 [2024-07-24 19:28:53.839399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.633 qpair failed and we were unable to recover it. 
00:28:07.633 [2024-07-24 19:28:53.849307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.633 [2024-07-24 19:28:53.849392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.633 [2024-07-24 19:28:53.849410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.633 [2024-07-24 19:28:53.849419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.633 [2024-07-24 19:28:53.849428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.633 [2024-07-24 19:28:53.849450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.633 qpair failed and we were unable to recover it. 00:28:07.633 [2024-07-24 19:28:53.859332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.633 [2024-07-24 19:28:53.859411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.633 [2024-07-24 19:28:53.859428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.633 [2024-07-24 19:28:53.859438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.633 [2024-07-24 19:28:53.859447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.633 [2024-07-24 19:28:53.859465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.633 qpair failed and we were unable to recover it. 00:28:07.894 [2024-07-24 19:28:53.869365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.894 [2024-07-24 19:28:53.869478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.894 [2024-07-24 19:28:53.869496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.894 [2024-07-24 19:28:53.869506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.894 [2024-07-24 19:28:53.869515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.894 [2024-07-24 19:28:53.869534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.894 qpair failed and we were unable to recover it. 
00:28:07.894 [2024-07-24 19:28:53.879431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.894 [2024-07-24 19:28:53.879506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.894 [2024-07-24 19:28:53.879523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.894 [2024-07-24 19:28:53.879533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.894 [2024-07-24 19:28:53.879541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.894 [2024-07-24 19:28:53.879559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.894 qpair failed and we were unable to recover it. 00:28:07.894 [2024-07-24 19:28:53.889360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.894 [2024-07-24 19:28:53.889437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.894 [2024-07-24 19:28:53.889454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.894 [2024-07-24 19:28:53.889464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.894 [2024-07-24 19:28:53.889473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.894 [2024-07-24 19:28:53.889490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.894 qpair failed and we were unable to recover it. 00:28:07.894 [2024-07-24 19:28:53.899404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.894 [2024-07-24 19:28:53.899478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.894 [2024-07-24 19:28:53.899498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.894 [2024-07-24 19:28:53.899508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.894 [2024-07-24 19:28:53.899517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.894 [2024-07-24 19:28:53.899535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.894 qpair failed and we were unable to recover it. 
00:28:07.894 [2024-07-24 19:28:53.909370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.894 [2024-07-24 19:28:53.909445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.894 [2024-07-24 19:28:53.909463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.894 [2024-07-24 19:28:53.909473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.894 [2024-07-24 19:28:53.909481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.894 [2024-07-24 19:28:53.909499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.894 qpair failed and we were unable to recover it. 00:28:07.894 [2024-07-24 19:28:53.919460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.894 [2024-07-24 19:28:53.919572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.894 [2024-07-24 19:28:53.919598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.894 [2024-07-24 19:28:53.919608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.894 [2024-07-24 19:28:53.919617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.894 [2024-07-24 19:28:53.919636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.894 qpair failed and we were unable to recover it. 00:28:07.894 [2024-07-24 19:28:53.929442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.894 [2024-07-24 19:28:53.929571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.894 [2024-07-24 19:28:53.929590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.894 [2024-07-24 19:28:53.929600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.894 [2024-07-24 19:28:53.929609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.894 [2024-07-24 19:28:53.929627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.894 qpair failed and we were unable to recover it. 
00:28:07.894 [2024-07-24 19:28:53.939512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.894 [2024-07-24 19:28:53.939592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.894 [2024-07-24 19:28:53.939609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.894 [2024-07-24 19:28:53.939619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.894 [2024-07-24 19:28:53.939633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.894 [2024-07-24 19:28:53.939652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.894 qpair failed and we were unable to recover it. 00:28:07.894 [2024-07-24 19:28:53.949525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.894 [2024-07-24 19:28:53.949608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.894 [2024-07-24 19:28:53.949625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.894 [2024-07-24 19:28:53.949635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.894 [2024-07-24 19:28:53.949643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.894 [2024-07-24 19:28:53.949660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.894 qpair failed and we were unable to recover it. 00:28:07.894 [2024-07-24 19:28:53.959583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.894 [2024-07-24 19:28:53.959740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.894 [2024-07-24 19:28:53.959758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.894 [2024-07-24 19:28:53.959768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.894 [2024-07-24 19:28:53.959777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:07.894 [2024-07-24 19:28:53.959796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.894 qpair failed and we were unable to recover it. 
00:28:07.894 [2024-07-24 19:28:53.969593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.894 [2024-07-24 19:28:53.969675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.894 [2024-07-24 19:28:53.969693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.894 [2024-07-24 19:28:53.969702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.894 [2024-07-24 19:28:53.969711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.894 [2024-07-24 19:28:53.969733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.894 qpair failed and we were unable to recover it.
00:28:07.894 [2024-07-24 19:28:53.979673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.894 [2024-07-24 19:28:53.979749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.895 [2024-07-24 19:28:53.979766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.895 [2024-07-24 19:28:53.979776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.895 [2024-07-24 19:28:53.979784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.895 [2024-07-24 19:28:53.979802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.895 qpair failed and we were unable to recover it.
00:28:07.895 [2024-07-24 19:28:53.989669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.895 [2024-07-24 19:28:53.989751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.895 [2024-07-24 19:28:53.989769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.895 [2024-07-24 19:28:53.989779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.895 [2024-07-24 19:28:53.989787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.895 [2024-07-24 19:28:53.989805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.895 qpair failed and we were unable to recover it.
00:28:07.895 [2024-07-24 19:28:53.999703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.895 [2024-07-24 19:28:53.999783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.895 [2024-07-24 19:28:53.999800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.895 [2024-07-24 19:28:53.999810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.895 [2024-07-24 19:28:53.999818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.895 [2024-07-24 19:28:53.999836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.895 qpair failed and we were unable to recover it.
00:28:07.895 [2024-07-24 19:28:54.009706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.895 [2024-07-24 19:28:54.009837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.895 [2024-07-24 19:28:54.009856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.895 [2024-07-24 19:28:54.009865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.895 [2024-07-24 19:28:54.009874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.895 [2024-07-24 19:28:54.009892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.895 qpair failed and we were unable to recover it.
00:28:07.895 [2024-07-24 19:28:54.019746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.895 [2024-07-24 19:28:54.019830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.895 [2024-07-24 19:28:54.019847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.895 [2024-07-24 19:28:54.019857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.895 [2024-07-24 19:28:54.019865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.895 [2024-07-24 19:28:54.019883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.895 qpair failed and we were unable to recover it.
00:28:07.895 [2024-07-24 19:28:54.029816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.895 [2024-07-24 19:28:54.029938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.895 [2024-07-24 19:28:54.029957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.895 [2024-07-24 19:28:54.029968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.895 [2024-07-24 19:28:54.029980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.895 [2024-07-24 19:28:54.029998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.895 qpair failed and we were unable to recover it.
00:28:07.895 [2024-07-24 19:28:54.039832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.895 [2024-07-24 19:28:54.039950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.895 [2024-07-24 19:28:54.039968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.895 [2024-07-24 19:28:54.039978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.895 [2024-07-24 19:28:54.039987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.895 [2024-07-24 19:28:54.040006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.895 qpair failed and we were unable to recover it.
00:28:07.895 [2024-07-24 19:28:54.049784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.895 [2024-07-24 19:28:54.049871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.895 [2024-07-24 19:28:54.049888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.895 [2024-07-24 19:28:54.049897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.895 [2024-07-24 19:28:54.049906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.895 [2024-07-24 19:28:54.049924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.895 qpair failed and we were unable to recover it.
00:28:07.895 [2024-07-24 19:28:54.059906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.895 [2024-07-24 19:28:54.059984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.895 [2024-07-24 19:28:54.060001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.895 [2024-07-24 19:28:54.060011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.895 [2024-07-24 19:28:54.060020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.895 [2024-07-24 19:28:54.060037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.895 qpair failed and we were unable to recover it.
00:28:07.895 [2024-07-24 19:28:54.069911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.895 [2024-07-24 19:28:54.070016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.895 [2024-07-24 19:28:54.070034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.895 [2024-07-24 19:28:54.070044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.895 [2024-07-24 19:28:54.070053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.895 [2024-07-24 19:28:54.070071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.895 qpair failed and we were unable to recover it.
00:28:07.895 [2024-07-24 19:28:54.079848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.895 [2024-07-24 19:28:54.079923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.895 [2024-07-24 19:28:54.079941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.895 [2024-07-24 19:28:54.079950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.895 [2024-07-24 19:28:54.079959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.895 [2024-07-24 19:28:54.079976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.895 qpair failed and we were unable to recover it.
00:28:07.895 [2024-07-24 19:28:54.089938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.895 [2024-07-24 19:28:54.090015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.895 [2024-07-24 19:28:54.090032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.895 [2024-07-24 19:28:54.090042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.895 [2024-07-24 19:28:54.090050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.895 [2024-07-24 19:28:54.090068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.895 qpair failed and we were unable to recover it.
00:28:07.895 [2024-07-24 19:28:54.100063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.895 [2024-07-24 19:28:54.100142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.895 [2024-07-24 19:28:54.100160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.895 [2024-07-24 19:28:54.100170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.896 [2024-07-24 19:28:54.100179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.896 [2024-07-24 19:28:54.100197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.896 qpair failed and we were unable to recover it.
00:28:07.896 [2024-07-24 19:28:54.109947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.896 [2024-07-24 19:28:54.110018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.896 [2024-07-24 19:28:54.110036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.896 [2024-07-24 19:28:54.110046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.896 [2024-07-24 19:28:54.110054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.896 [2024-07-24 19:28:54.110072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.896 qpair failed and we were unable to recover it.
00:28:07.896 [2024-07-24 19:28:54.120117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.896 [2024-07-24 19:28:54.120190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.896 [2024-07-24 19:28:54.120208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.896 [2024-07-24 19:28:54.120221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.896 [2024-07-24 19:28:54.120230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.896 [2024-07-24 19:28:54.120249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.896 qpair failed and we were unable to recover it.
00:28:07.896 [2024-07-24 19:28:54.130070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.896 [2024-07-24 19:28:54.130153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.896 [2024-07-24 19:28:54.130170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.896 [2024-07-24 19:28:54.130179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.896 [2024-07-24 19:28:54.130188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:07.896 [2024-07-24 19:28:54.130205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.896 qpair failed and we were unable to recover it.
00:28:08.157 [2024-07-24 19:28:54.140085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.157 [2024-07-24 19:28:54.140173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.157 [2024-07-24 19:28:54.140191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.157 [2024-07-24 19:28:54.140201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.157 [2024-07-24 19:28:54.140210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.157 [2024-07-24 19:28:54.140228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.157 qpair failed and we were unable to recover it.
00:28:08.157 [2024-07-24 19:28:54.150155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.157 [2024-07-24 19:28:54.150269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.157 [2024-07-24 19:28:54.150287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.157 [2024-07-24 19:28:54.150298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.157 [2024-07-24 19:28:54.150307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.157 [2024-07-24 19:28:54.150325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.157 qpair failed and we were unable to recover it.
00:28:08.157 [2024-07-24 19:28:54.160136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.157 [2024-07-24 19:28:54.160229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.157 [2024-07-24 19:28:54.160246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.157 [2024-07-24 19:28:54.160256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.157 [2024-07-24 19:28:54.160265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.157 [2024-07-24 19:28:54.160284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.157 qpair failed and we were unable to recover it.
00:28:08.157 [2024-07-24 19:28:54.170162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.157 [2024-07-24 19:28:54.170238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.157 [2024-07-24 19:28:54.170256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.157 [2024-07-24 19:28:54.170265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.157 [2024-07-24 19:28:54.170274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.157 [2024-07-24 19:28:54.170292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.157 qpair failed and we were unable to recover it.
00:28:08.157 [2024-07-24 19:28:54.180197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.157 [2024-07-24 19:28:54.180282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.157 [2024-07-24 19:28:54.180298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.157 [2024-07-24 19:28:54.180308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.157 [2024-07-24 19:28:54.180316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.157 [2024-07-24 19:28:54.180333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.157 qpair failed and we were unable to recover it.
00:28:08.157 [2024-07-24 19:28:54.190157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.157 [2024-07-24 19:28:54.190234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.157 [2024-07-24 19:28:54.190251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.157 [2024-07-24 19:28:54.190261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.157 [2024-07-24 19:28:54.190270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.157 [2024-07-24 19:28:54.190288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.157 qpair failed and we were unable to recover it.
00:28:08.157 [2024-07-24 19:28:54.200255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.157 [2024-07-24 19:28:54.200343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.157 [2024-07-24 19:28:54.200360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.157 [2024-07-24 19:28:54.200369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.157 [2024-07-24 19:28:54.200378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.157 [2024-07-24 19:28:54.200396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.157 qpair failed and we were unable to recover it.
00:28:08.157 [2024-07-24 19:28:54.210276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.157 [2024-07-24 19:28:54.210353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.157 [2024-07-24 19:28:54.210373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.157 [2024-07-24 19:28:54.210383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.157 [2024-07-24 19:28:54.210392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.157 [2024-07-24 19:28:54.210410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.157 qpair failed and we were unable to recover it.
00:28:08.157 [2024-07-24 19:28:54.220338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.157 [2024-07-24 19:28:54.220460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.157 [2024-07-24 19:28:54.220477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.157 [2024-07-24 19:28:54.220488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.157 [2024-07-24 19:28:54.220497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.157 [2024-07-24 19:28:54.220514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.157 qpair failed and we were unable to recover it.
00:28:08.157 [2024-07-24 19:28:54.230352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.157 [2024-07-24 19:28:54.230440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.157 [2024-07-24 19:28:54.230457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.157 [2024-07-24 19:28:54.230467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.157 [2024-07-24 19:28:54.230475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.157 [2024-07-24 19:28:54.230494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.157 qpair failed and we were unable to recover it.
00:28:08.157 [2024-07-24 19:28:54.240394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.157 [2024-07-24 19:28:54.240471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.157 [2024-07-24 19:28:54.240488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.157 [2024-07-24 19:28:54.240498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.157 [2024-07-24 19:28:54.240506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.157 [2024-07-24 19:28:54.240524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.157 qpair failed and we were unable to recover it.
00:28:08.157 [2024-07-24 19:28:54.250444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.158 [2024-07-24 19:28:54.250567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.158 [2024-07-24 19:28:54.250585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.158 [2024-07-24 19:28:54.250595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.158 [2024-07-24 19:28:54.250604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.158 [2024-07-24 19:28:54.250625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.158 qpair failed and we were unable to recover it.
00:28:08.158 [2024-07-24 19:28:54.260423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.158 [2024-07-24 19:28:54.260503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.158 [2024-07-24 19:28:54.260520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.158 [2024-07-24 19:28:54.260529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.158 [2024-07-24 19:28:54.260539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.158 [2024-07-24 19:28:54.260556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.158 qpair failed and we were unable to recover it.
00:28:08.158 [2024-07-24 19:28:54.270463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.158 [2024-07-24 19:28:54.270540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.158 [2024-07-24 19:28:54.270559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.158 [2024-07-24 19:28:54.270569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.158 [2024-07-24 19:28:54.270578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.158 [2024-07-24 19:28:54.270596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.158 qpair failed and we were unable to recover it.
00:28:08.158 [2024-07-24 19:28:54.280508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.158 [2024-07-24 19:28:54.280582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.158 [2024-07-24 19:28:54.280600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.158 [2024-07-24 19:28:54.280610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.158 [2024-07-24 19:28:54.280618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.158 [2024-07-24 19:28:54.280636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.158 qpair failed and we were unable to recover it.
00:28:08.158 [2024-07-24 19:28:54.290495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.158 [2024-07-24 19:28:54.290584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.158 [2024-07-24 19:28:54.290601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.158 [2024-07-24 19:28:54.290611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.158 [2024-07-24 19:28:54.290619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.158 [2024-07-24 19:28:54.290637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.158 qpair failed and we were unable to recover it.
00:28:08.158 [2024-07-24 19:28:54.300539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.158 [2024-07-24 19:28:54.300616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.158 [2024-07-24 19:28:54.300637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.158 [2024-07-24 19:28:54.300646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.158 [2024-07-24 19:28:54.300655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.158 [2024-07-24 19:28:54.300673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.158 qpair failed and we were unable to recover it.
00:28:08.158 [2024-07-24 19:28:54.310566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.158 [2024-07-24 19:28:54.310639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.158 [2024-07-24 19:28:54.310657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.158 [2024-07-24 19:28:54.310667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.158 [2024-07-24 19:28:54.310676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.158 [2024-07-24 19:28:54.310694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.158 qpair failed and we were unable to recover it.
00:28:08.158 [2024-07-24 19:28:54.320594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.158 [2024-07-24 19:28:54.320699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.158 [2024-07-24 19:28:54.320722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.158 [2024-07-24 19:28:54.320732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.158 [2024-07-24 19:28:54.320742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.158 [2024-07-24 19:28:54.320760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.158 qpair failed and we were unable to recover it.
00:28:08.158 [2024-07-24 19:28:54.330623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.158 [2024-07-24 19:28:54.330705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.158 [2024-07-24 19:28:54.330729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.158 [2024-07-24 19:28:54.330739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.158 [2024-07-24 19:28:54.330748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.158 [2024-07-24 19:28:54.330766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.158 qpair failed and we were unable to recover it.
00:28:08.158 [2024-07-24 19:28:54.340626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.158 [2024-07-24 19:28:54.340705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.158 [2024-07-24 19:28:54.340727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.158 [2024-07-24 19:28:54.340737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.158 [2024-07-24 19:28:54.340746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.158 [2024-07-24 19:28:54.340767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.158 qpair failed and we were unable to recover it.
00:28:08.158 [2024-07-24 19:28:54.350679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.158 [2024-07-24 19:28:54.350865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.158 [2024-07-24 19:28:54.350885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.158 [2024-07-24 19:28:54.350897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.158 [2024-07-24 19:28:54.350906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.158 [2024-07-24 19:28:54.350927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.158 qpair failed and we were unable to recover it.
00:28:08.158 [2024-07-24 19:28:54.360698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.158 [2024-07-24 19:28:54.360780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.158 [2024-07-24 19:28:54.360798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.158 [2024-07-24 19:28:54.360808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.158 [2024-07-24 19:28:54.360817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.158 [2024-07-24 19:28:54.360836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.158 qpair failed and we were unable to recover it.
00:28:08.158 [2024-07-24 19:28:54.370726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.159 [2024-07-24 19:28:54.370819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.159 [2024-07-24 19:28:54.370836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.159 [2024-07-24 19:28:54.370846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.159 [2024-07-24 19:28:54.370855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.159 [2024-07-24 19:28:54.370874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.159 qpair failed and we were unable to recover it.
00:28:08.159 [2024-07-24 19:28:54.380795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.159 [2024-07-24 19:28:54.380874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.159 [2024-07-24 19:28:54.380892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.159 [2024-07-24 19:28:54.380902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.159 [2024-07-24 19:28:54.380911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.159 [2024-07-24 19:28:54.380930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.159 qpair failed and we were unable to recover it.
00:28:08.159 [2024-07-24 19:28:54.390820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.159 [2024-07-24 19:28:54.390897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.159 [2024-07-24 19:28:54.390915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.159 [2024-07-24 19:28:54.390924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.159 [2024-07-24 19:28:54.390933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.159 [2024-07-24 19:28:54.390951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.159 qpair failed and we were unable to recover it.
00:28:08.419 [2024-07-24 19:28:54.400799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.419 [2024-07-24 19:28:54.400870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.419 [2024-07-24 19:28:54.400888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.419 [2024-07-24 19:28:54.400898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.419 [2024-07-24 19:28:54.400906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.419 [2024-07-24 19:28:54.400924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.419 qpair failed and we were unable to recover it.
00:28:08.419 [2024-07-24 19:28:54.410821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.419 [2024-07-24 19:28:54.410898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.419 [2024-07-24 19:28:54.410917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.419 [2024-07-24 19:28:54.410927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.419 [2024-07-24 19:28:54.410935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.419 [2024-07-24 19:28:54.410953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.419 qpair failed and we were unable to recover it.
00:28:08.419 [2024-07-24 19:28:54.420861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.420 [2024-07-24 19:28:54.420934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.420 [2024-07-24 19:28:54.420952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.420 [2024-07-24 19:28:54.420961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.420 [2024-07-24 19:28:54.420970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.420 [2024-07-24 19:28:54.420988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.420 qpair failed and we were unable to recover it.
00:28:08.420 [2024-07-24 19:28:54.430892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.420 [2024-07-24 19:28:54.430966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.420 [2024-07-24 19:28:54.430984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.420 [2024-07-24 19:28:54.430994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.420 [2024-07-24 19:28:54.431006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.420 [2024-07-24 19:28:54.431024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.420 qpair failed and we were unable to recover it.
00:28:08.420 [2024-07-24 19:28:54.440909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.420 [2024-07-24 19:28:54.440991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.420 [2024-07-24 19:28:54.441009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.420 [2024-07-24 19:28:54.441019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.420 [2024-07-24 19:28:54.441028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.420 [2024-07-24 19:28:54.441046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.420 qpair failed and we were unable to recover it.
00:28:08.420 [2024-07-24 19:28:54.450937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.420 [2024-07-24 19:28:54.451013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.420 [2024-07-24 19:28:54.451031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.420 [2024-07-24 19:28:54.451041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.420 [2024-07-24 19:28:54.451050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.420 [2024-07-24 19:28:54.451068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.420 qpair failed and we were unable to recover it.
00:28:08.420 [2024-07-24 19:28:54.460969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.420 [2024-07-24 19:28:54.461046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.420 [2024-07-24 19:28:54.461063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.420 [2024-07-24 19:28:54.461073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.420 [2024-07-24 19:28:54.461082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.420 [2024-07-24 19:28:54.461100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.420 qpair failed and we were unable to recover it.
00:28:08.420 [2024-07-24 19:28:54.470947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.420 [2024-07-24 19:28:54.471048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.420 [2024-07-24 19:28:54.471065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.420 [2024-07-24 19:28:54.471075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.420 [2024-07-24 19:28:54.471084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.420 [2024-07-24 19:28:54.471102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.420 qpair failed and we were unable to recover it.
00:28:08.420 [2024-07-24 19:28:54.481032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.420 [2024-07-24 19:28:54.481108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.420 [2024-07-24 19:28:54.481126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.420 [2024-07-24 19:28:54.481136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.420 [2024-07-24 19:28:54.481144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.420 [2024-07-24 19:28:54.481162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.420 qpair failed and we were unable to recover it.
00:28:08.420 [2024-07-24 19:28:54.491092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.420 [2024-07-24 19:28:54.491169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.420 [2024-07-24 19:28:54.491186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.420 [2024-07-24 19:28:54.491196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.420 [2024-07-24 19:28:54.491205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.420 [2024-07-24 19:28:54.491223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.420 qpair failed and we were unable to recover it.
00:28:08.420 [2024-07-24 19:28:54.501177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.420 [2024-07-24 19:28:54.501256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.420 [2024-07-24 19:28:54.501273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.420 [2024-07-24 19:28:54.501282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.420 [2024-07-24 19:28:54.501291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.420 [2024-07-24 19:28:54.501310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.420 qpair failed and we were unable to recover it.
00:28:08.420 [2024-07-24 19:28:54.511116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.420 [2024-07-24 19:28:54.511213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.420 [2024-07-24 19:28:54.511231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.420 [2024-07-24 19:28:54.511241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.420 [2024-07-24 19:28:54.511250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.420 [2024-07-24 19:28:54.511268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.420 qpair failed and we were unable to recover it.
00:28:08.420 [2024-07-24 19:28:54.521100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.420 [2024-07-24 19:28:54.521176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.420 [2024-07-24 19:28:54.521194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.420 [2024-07-24 19:28:54.521208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.420 [2024-07-24 19:28:54.521217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.420 [2024-07-24 19:28:54.521234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.420 qpair failed and we were unable to recover it.
00:28:08.420 [2024-07-24 19:28:54.531110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.420 [2024-07-24 19:28:54.531190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.420 [2024-07-24 19:28:54.531208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.420 [2024-07-24 19:28:54.531218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.420 [2024-07-24 19:28:54.531226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.420 [2024-07-24 19:28:54.531245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.420 qpair failed and we were unable to recover it.
00:28:08.420 [2024-07-24 19:28:54.541135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.420 [2024-07-24 19:28:54.541260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.420 [2024-07-24 19:28:54.541278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.421 [2024-07-24 19:28:54.541288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.421 [2024-07-24 19:28:54.541297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.421 [2024-07-24 19:28:54.541314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.421 qpair failed and we were unable to recover it.
00:28:08.421 [2024-07-24 19:28:54.551158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.421 [2024-07-24 19:28:54.551236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.421 [2024-07-24 19:28:54.551253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.421 [2024-07-24 19:28:54.551263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.421 [2024-07-24 19:28:54.551272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.421 [2024-07-24 19:28:54.551290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.421 qpair failed and we were unable to recover it.
00:28:08.421 [2024-07-24 19:28:54.561182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.421 [2024-07-24 19:28:54.561263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.421 [2024-07-24 19:28:54.561280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.421 [2024-07-24 19:28:54.561290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.421 [2024-07-24 19:28:54.561299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.421 [2024-07-24 19:28:54.561317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.421 qpair failed and we were unable to recover it.
00:28:08.421 [2024-07-24 19:28:54.571219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.421 [2024-07-24 19:28:54.571294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.421 [2024-07-24 19:28:54.571312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.421 [2024-07-24 19:28:54.571321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.421 [2024-07-24 19:28:54.571330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.421 [2024-07-24 19:28:54.571348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.421 qpair failed and we were unable to recover it.
00:28:08.421 [2024-07-24 19:28:54.581329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.421 [2024-07-24 19:28:54.581411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.421 [2024-07-24 19:28:54.581428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.421 [2024-07-24 19:28:54.581438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.421 [2024-07-24 19:28:54.581446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.421 [2024-07-24 19:28:54.581464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.421 qpair failed and we were unable to recover it.
00:28:08.421 [2024-07-24 19:28:54.591318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.421 [2024-07-24 19:28:54.591394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.421 [2024-07-24 19:28:54.591411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.421 [2024-07-24 19:28:54.591421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.421 [2024-07-24 19:28:54.591430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.421 [2024-07-24 19:28:54.591448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.421 qpair failed and we were unable to recover it.
00:28:08.421 [2024-07-24 19:28:54.601331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.421 [2024-07-24 19:28:54.601457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.421 [2024-07-24 19:28:54.601474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.421 [2024-07-24 19:28:54.601484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.421 [2024-07-24 19:28:54.601492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.421 [2024-07-24 19:28:54.601510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.421 qpair failed and we were unable to recover it.
00:28:08.421 [2024-07-24 19:28:54.611334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.421 [2024-07-24 19:28:54.611411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.421 [2024-07-24 19:28:54.611428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.421 [2024-07-24 19:28:54.611441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.421 [2024-07-24 19:28:54.611450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.421 [2024-07-24 19:28:54.611468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.421 qpair failed and we were unable to recover it.
00:28:08.421 [2024-07-24 19:28:54.621354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.421 [2024-07-24 19:28:54.621431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.421 [2024-07-24 19:28:54.621448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.421 [2024-07-24 19:28:54.621458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.421 [2024-07-24 19:28:54.621466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.421 [2024-07-24 19:28:54.621484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.421 qpair failed and we were unable to recover it.
00:28:08.421 [2024-07-24 19:28:54.631510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.421 [2024-07-24 19:28:54.631623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.421 [2024-07-24 19:28:54.631640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.421 [2024-07-24 19:28:54.631649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.421 [2024-07-24 19:28:54.631658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.421 [2024-07-24 19:28:54.631676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.421 qpair failed and we were unable to recover it.
00:28:08.421 [2024-07-24 19:28:54.641475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.421 [2024-07-24 19:28:54.641552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.421 [2024-07-24 19:28:54.641569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.421 [2024-07-24 19:28:54.641580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.421 [2024-07-24 19:28:54.641589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.421 [2024-07-24 19:28:54.641607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.421 qpair failed and we were unable to recover it.
00:28:08.421 [2024-07-24 19:28:54.651488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.421 [2024-07-24 19:28:54.651564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.421 [2024-07-24 19:28:54.651581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.421 [2024-07-24 19:28:54.651591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.421 [2024-07-24 19:28:54.651600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.421 [2024-07-24 19:28:54.651618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.421 qpair failed and we were unable to recover it.
00:28:08.683 [2024-07-24 19:28:54.661542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.683 [2024-07-24 19:28:54.661620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.683 [2024-07-24 19:28:54.661637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.683 [2024-07-24 19:28:54.661647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.683 [2024-07-24 19:28:54.661656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.683 [2024-07-24 19:28:54.661675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.683 qpair failed and we were unable to recover it.
00:28:08.683 [2024-07-24 19:28:54.671489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.683 [2024-07-24 19:28:54.671569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.683 [2024-07-24 19:28:54.671586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.683 [2024-07-24 19:28:54.671596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.683 [2024-07-24 19:28:54.671605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.683 [2024-07-24 19:28:54.671623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.683 qpair failed and we were unable to recover it.
00:28:08.683 [2024-07-24 19:28:54.681599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.683 [2024-07-24 19:28:54.681672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.683 [2024-07-24 19:28:54.681690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.683 [2024-07-24 19:28:54.681700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.683 [2024-07-24 19:28:54.681709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.683 [2024-07-24 19:28:54.681730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.683 qpair failed and we were unable to recover it.
00:28:08.683 [2024-07-24 19:28:54.691609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.683 [2024-07-24 19:28:54.691687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.683 [2024-07-24 19:28:54.691704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.683 [2024-07-24 19:28:54.691718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.683 [2024-07-24 19:28:54.691727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.683 [2024-07-24 19:28:54.691745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.683 qpair failed and we were unable to recover it.
00:28:08.683 [2024-07-24 19:28:54.701632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.683 [2024-07-24 19:28:54.701719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.683 [2024-07-24 19:28:54.701740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.683 [2024-07-24 19:28:54.701750] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.683 [2024-07-24 19:28:54.701759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.683 [2024-07-24 19:28:54.701777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.683 qpair failed and we were unable to recover it.
00:28:08.683 [2024-07-24 19:28:54.711710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.683 [2024-07-24 19:28:54.711790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.683 [2024-07-24 19:28:54.711807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.683 [2024-07-24 19:28:54.711817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.683 [2024-07-24 19:28:54.711826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.683 [2024-07-24 19:28:54.711844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.683 qpair failed and we were unable to recover it.
00:28:08.683 [2024-07-24 19:28:54.721635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.683 [2024-07-24 19:28:54.721706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.683 [2024-07-24 19:28:54.721727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.683 [2024-07-24 19:28:54.721737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.683 [2024-07-24 19:28:54.721746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.683 [2024-07-24 19:28:54.721764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.683 qpair failed and we were unable to recover it.
00:28:08.683 [2024-07-24 19:28:54.731721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.683 [2024-07-24 19:28:54.731800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.683 [2024-07-24 19:28:54.731818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.683 [2024-07-24 19:28:54.731827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.683 [2024-07-24 19:28:54.731836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.683 [2024-07-24 19:28:54.731854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.683 qpair failed and we were unable to recover it.
00:28:08.683 [2024-07-24 19:28:54.741760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.683 [2024-07-24 19:28:54.741914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.683 [2024-07-24 19:28:54.741931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.683 [2024-07-24 19:28:54.741940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.683 [2024-07-24 19:28:54.741949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.683 [2024-07-24 19:28:54.741971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.683 qpair failed and we were unable to recover it.
00:28:08.683 [2024-07-24 19:28:54.751794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.683 [2024-07-24 19:28:54.751873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.683 [2024-07-24 19:28:54.751891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.683 [2024-07-24 19:28:54.751901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.683 [2024-07-24 19:28:54.751910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.683 [2024-07-24 19:28:54.751928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.683 qpair failed and we were unable to recover it.
00:28:08.683 [2024-07-24 19:28:54.761813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.683 [2024-07-24 19:28:54.761887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.683 [2024-07-24 19:28:54.761904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.683 [2024-07-24 19:28:54.761914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.683 [2024-07-24 19:28:54.761923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.683 [2024-07-24 19:28:54.761942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.683 qpair failed and we were unable to recover it.
00:28:08.683 [2024-07-24 19:28:54.771828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.683 [2024-07-24 19:28:54.771907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.683 [2024-07-24 19:28:54.771925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.683 [2024-07-24 19:28:54.771936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.684 [2024-07-24 19:28:54.771944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.684 [2024-07-24 19:28:54.771962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.684 qpair failed and we were unable to recover it.
00:28:08.684 [2024-07-24 19:28:54.781871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.684 [2024-07-24 19:28:54.781949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.684 [2024-07-24 19:28:54.781966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.684 [2024-07-24 19:28:54.781976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.684 [2024-07-24 19:28:54.781985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.684 [2024-07-24 19:28:54.782003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.684 qpair failed and we were unable to recover it.
00:28:08.684 [2024-07-24 19:28:54.791916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.684 [2024-07-24 19:28:54.792067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.684 [2024-07-24 19:28:54.792087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.684 [2024-07-24 19:28:54.792097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.684 [2024-07-24 19:28:54.792106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.684 [2024-07-24 19:28:54.792123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.684 qpair failed and we were unable to recover it.
00:28:08.684 [2024-07-24 19:28:54.801943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.684 [2024-07-24 19:28:54.802021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.684 [2024-07-24 19:28:54.802038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.684 [2024-07-24 19:28:54.802048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.684 [2024-07-24 19:28:54.802056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.684 [2024-07-24 19:28:54.802075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.684 qpair failed and we were unable to recover it.
00:28:08.684 [2024-07-24 19:28:54.811964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.684 [2024-07-24 19:28:54.812043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.684 [2024-07-24 19:28:54.812061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.684 [2024-07-24 19:28:54.812071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.684 [2024-07-24 19:28:54.812079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.684 [2024-07-24 19:28:54.812097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.684 qpair failed and we were unable to recover it.
00:28:08.684 [2024-07-24 19:28:54.821994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.684 [2024-07-24 19:28:54.822090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.684 [2024-07-24 19:28:54.822107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.684 [2024-07-24 19:28:54.822116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.684 [2024-07-24 19:28:54.822125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.684 [2024-07-24 19:28:54.822143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.684 qpair failed and we were unable to recover it.
00:28:08.684 [2024-07-24 19:28:54.832057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.684 [2024-07-24 19:28:54.832135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.684 [2024-07-24 19:28:54.832153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.684 [2024-07-24 19:28:54.832163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.684 [2024-07-24 19:28:54.832175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.684 [2024-07-24 19:28:54.832192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.684 qpair failed and we were unable to recover it.
00:28:08.684 [2024-07-24 19:28:54.842045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.684 [2024-07-24 19:28:54.842121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.684 [2024-07-24 19:28:54.842139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.684 [2024-07-24 19:28:54.842149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.684 [2024-07-24 19:28:54.842157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.684 [2024-07-24 19:28:54.842175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.684 qpair failed and we were unable to recover it.
00:28:08.684 [2024-07-24 19:28:54.852005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.684 [2024-07-24 19:28:54.852109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.684 [2024-07-24 19:28:54.852126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.684 [2024-07-24 19:28:54.852136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.684 [2024-07-24 19:28:54.852145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.684 [2024-07-24 19:28:54.852163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.684 qpair failed and we were unable to recover it.
00:28:08.684 [2024-07-24 19:28:54.862111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.684 [2024-07-24 19:28:54.862187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.684 [2024-07-24 19:28:54.862204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.684 [2024-07-24 19:28:54.862214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.684 [2024-07-24 19:28:54.862223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.684 [2024-07-24 19:28:54.862241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.684 qpair failed and we were unable to recover it.
00:28:08.684 [2024-07-24 19:28:54.872071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.684 [2024-07-24 19:28:54.872152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.684 [2024-07-24 19:28:54.872169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.684 [2024-07-24 19:28:54.872179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.684 [2024-07-24 19:28:54.872188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.684 [2024-07-24 19:28:54.872206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.684 qpair failed and we were unable to recover it.
00:28:08.684 [2024-07-24 19:28:54.882091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.684 [2024-07-24 19:28:54.882186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.684 [2024-07-24 19:28:54.882204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.684 [2024-07-24 19:28:54.882213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.684 [2024-07-24 19:28:54.882222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.685 [2024-07-24 19:28:54.882240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.685 qpair failed and we were unable to recover it.
00:28:08.685 [2024-07-24 19:28:54.892209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.685 [2024-07-24 19:28:54.892297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.685 [2024-07-24 19:28:54.892315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.685 [2024-07-24 19:28:54.892325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.685 [2024-07-24 19:28:54.892334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.685 [2024-07-24 19:28:54.892351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.685 qpair failed and we were unable to recover it.
00:28:08.685 [2024-07-24 19:28:54.902174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.685 [2024-07-24 19:28:54.902252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.685 [2024-07-24 19:28:54.902270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.685 [2024-07-24 19:28:54.902280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.685 [2024-07-24 19:28:54.902289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.685 [2024-07-24 19:28:54.902307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.685 qpair failed and we were unable to recover it.
00:28:08.685 [2024-07-24 19:28:54.912246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.685 [2024-07-24 19:28:54.912322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.685 [2024-07-24 19:28:54.912340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.685 [2024-07-24 19:28:54.912350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.685 [2024-07-24 19:28:54.912358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.685 [2024-07-24 19:28:54.912376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.685 qpair failed and we were unable to recover it.
00:28:08.946 [2024-07-24 19:28:54.922290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.946 [2024-07-24 19:28:54.922365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.946 [2024-07-24 19:28:54.922382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.946 [2024-07-24 19:28:54.922396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.946 [2024-07-24 19:28:54.922405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.946 [2024-07-24 19:28:54.922423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.946 qpair failed and we were unable to recover it.
00:28:08.946 [2024-07-24 19:28:54.932330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.946 [2024-07-24 19:28:54.932408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.946 [2024-07-24 19:28:54.932425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.946 [2024-07-24 19:28:54.932435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.946 [2024-07-24 19:28:54.932444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.946 [2024-07-24 19:28:54.932462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.946 qpair failed and we were unable to recover it.
00:28:08.946 [2024-07-24 19:28:54.942271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.946 [2024-07-24 19:28:54.942365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.946 [2024-07-24 19:28:54.942383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.946 [2024-07-24 19:28:54.942394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.946 [2024-07-24 19:28:54.942404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.946 [2024-07-24 19:28:54.942423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.946 qpair failed and we were unable to recover it.
00:28:08.946 [2024-07-24 19:28:54.952385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.946 [2024-07-24 19:28:54.952460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.946 [2024-07-24 19:28:54.952477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.946 [2024-07-24 19:28:54.952486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.946 [2024-07-24 19:28:54.952495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.946 [2024-07-24 19:28:54.952512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.946 qpair failed and we were unable to recover it.
00:28:08.946 [2024-07-24 19:28:54.962408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.946 [2024-07-24 19:28:54.962512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.946 [2024-07-24 19:28:54.962529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.946 [2024-07-24 19:28:54.962539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.946 [2024-07-24 19:28:54.962548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.946 [2024-07-24 19:28:54.962566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.946 qpair failed and we were unable to recover it.
00:28:08.946 [2024-07-24 19:28:54.972418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.946 [2024-07-24 19:28:54.972493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.946 [2024-07-24 19:28:54.972510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.946 [2024-07-24 19:28:54.972520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.946 [2024-07-24 19:28:54.972529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.946 [2024-07-24 19:28:54.972546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.946 qpair failed and we were unable to recover it.
00:28:08.946 [2024-07-24 19:28:54.982464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.946 [2024-07-24 19:28:54.982553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.946 [2024-07-24 19:28:54.982571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.946 [2024-07-24 19:28:54.982580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.946 [2024-07-24 19:28:54.982589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.946 [2024-07-24 19:28:54.982607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.946 qpair failed and we were unable to recover it.
00:28:08.946 [2024-07-24 19:28:54.992512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.946 [2024-07-24 19:28:54.992582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.946 [2024-07-24 19:28:54.992600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.946 [2024-07-24 19:28:54.992610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.946 [2024-07-24 19:28:54.992618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.946 [2024-07-24 19:28:54.992637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.946 qpair failed and we were unable to recover it.
00:28:08.946 [2024-07-24 19:28:55.002523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.946 [2024-07-24 19:28:55.002637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.946 [2024-07-24 19:28:55.002654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.946 [2024-07-24 19:28:55.002664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.946 [2024-07-24 19:28:55.002672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.946 [2024-07-24 19:28:55.002690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.946 qpair failed and we were unable to recover it.
00:28:08.946 [2024-07-24 19:28:55.012497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.947 [2024-07-24 19:28:55.012571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.947 [2024-07-24 19:28:55.012590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.947 [2024-07-24 19:28:55.012603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.947 [2024-07-24 19:28:55.012611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.947 [2024-07-24 19:28:55.012629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.947 qpair failed and we were unable to recover it.
00:28:08.947 [2024-07-24 19:28:55.022573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.947 [2024-07-24 19:28:55.022652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.947 [2024-07-24 19:28:55.022670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.947 [2024-07-24 19:28:55.022680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.947 [2024-07-24 19:28:55.022689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.947 [2024-07-24 19:28:55.022706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.947 qpair failed and we were unable to recover it.
00:28:08.947 [2024-07-24 19:28:55.032596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.947 [2024-07-24 19:28:55.032745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.947 [2024-07-24 19:28:55.032762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.947 [2024-07-24 19:28:55.032772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.947 [2024-07-24 19:28:55.032780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.947 [2024-07-24 19:28:55.032799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.947 qpair failed and we were unable to recover it.
00:28:08.947 [2024-07-24 19:28:55.042602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.947 [2024-07-24 19:28:55.042695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.947 [2024-07-24 19:28:55.042712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.947 [2024-07-24 19:28:55.042726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.947 [2024-07-24 19:28:55.042734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.947 [2024-07-24 19:28:55.042752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.947 qpair failed and we were unable to recover it.
00:28:08.947 [2024-07-24 19:28:55.052615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.947 [2024-07-24 19:28:55.052705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.947 [2024-07-24 19:28:55.052725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.947 [2024-07-24 19:28:55.052735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.947 [2024-07-24 19:28:55.052744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.947 [2024-07-24 19:28:55.052762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.947 qpair failed and we were unable to recover it.
00:28:08.947 [2024-07-24 19:28:55.062687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.947 [2024-07-24 19:28:55.062814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.947 [2024-07-24 19:28:55.062832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.947 [2024-07-24 19:28:55.062841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.947 [2024-07-24 19:28:55.062850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.947 [2024-07-24 19:28:55.062868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.947 qpair failed and we were unable to recover it.
00:28:08.947 [2024-07-24 19:28:55.072732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.947 [2024-07-24 19:28:55.072807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.947 [2024-07-24 19:28:55.072824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.947 [2024-07-24 19:28:55.072834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.947 [2024-07-24 19:28:55.072842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.947 [2024-07-24 19:28:55.072860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.947 qpair failed and we were unable to recover it.
00:28:08.947 [2024-07-24 19:28:55.082745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.947 [2024-07-24 19:28:55.082822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.947 [2024-07-24 19:28:55.082839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.947 [2024-07-24 19:28:55.082849] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.947 [2024-07-24 19:28:55.082858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.947 [2024-07-24 19:28:55.082877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.947 qpair failed and we were unable to recover it.
00:28:08.947 [2024-07-24 19:28:55.092838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.947 [2024-07-24 19:28:55.092926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.947 [2024-07-24 19:28:55.092943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.947 [2024-07-24 19:28:55.092952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.947 [2024-07-24 19:28:55.092961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.947 [2024-07-24 19:28:55.092979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.947 qpair failed and we were unable to recover it.
00:28:08.947 [2024-07-24 19:28:55.102813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.947 [2024-07-24 19:28:55.102916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.947 [2024-07-24 19:28:55.102937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.947 [2024-07-24 19:28:55.102946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.947 [2024-07-24 19:28:55.102955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:08.947 [2024-07-24 19:28:55.102972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.947 qpair failed and we were unable to recover it.
00:28:08.947 [2024-07-24 19:28:55.112832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.947 [2024-07-24 19:28:55.112902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.947 [2024-07-24 19:28:55.112920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.947 [2024-07-24 19:28:55.112930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.947 [2024-07-24 19:28:55.112939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:08.947 [2024-07-24 19:28:55.112957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:08.947 qpair failed and we were unable to recover it. 00:28:08.947 [2024-07-24 19:28:55.122881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.947 [2024-07-24 19:28:55.122984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.947 [2024-07-24 19:28:55.123001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.947 [2024-07-24 19:28:55.123011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.947 [2024-07-24 19:28:55.123019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:08.947 [2024-07-24 19:28:55.123037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:08.947 qpair failed and we were unable to recover it. 00:28:08.947 [2024-07-24 19:28:55.132868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.948 [2024-07-24 19:28:55.132945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.948 [2024-07-24 19:28:55.132962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.948 [2024-07-24 19:28:55.132972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.948 [2024-07-24 19:28:55.132981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:08.948 [2024-07-24 19:28:55.132998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:08.948 qpair failed and we were unable to recover it. 
00:28:08.948 [2024-07-24 19:28:55.142913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.948 [2024-07-24 19:28:55.143002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.948 [2024-07-24 19:28:55.143019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.948 [2024-07-24 19:28:55.143029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.948 [2024-07-24 19:28:55.143038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:08.948 [2024-07-24 19:28:55.143058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:08.948 qpair failed and we were unable to recover it. 00:28:08.948 [2024-07-24 19:28:55.152958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.948 [2024-07-24 19:28:55.153063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.948 [2024-07-24 19:28:55.153081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.948 [2024-07-24 19:28:55.153090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.948 [2024-07-24 19:28:55.153099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:08.948 [2024-07-24 19:28:55.153117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:08.948 qpair failed and we were unable to recover it. 00:28:08.948 [2024-07-24 19:28:55.163001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.948 [2024-07-24 19:28:55.163078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.948 [2024-07-24 19:28:55.163095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.948 [2024-07-24 19:28:55.163105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.948 [2024-07-24 19:28:55.163114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:08.948 [2024-07-24 19:28:55.163131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:08.948 qpair failed and we were unable to recover it. 
00:28:08.948 [2024-07-24 19:28:55.172921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.948 [2024-07-24 19:28:55.172997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.948 [2024-07-24 19:28:55.173014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.948 [2024-07-24 19:28:55.173024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.948 [2024-07-24 19:28:55.173032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:08.948 [2024-07-24 19:28:55.173050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:08.948 qpair failed and we were unable to recover it. 00:28:08.948 [2024-07-24 19:28:55.183024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.948 [2024-07-24 19:28:55.183146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.948 [2024-07-24 19:28:55.183163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.948 [2024-07-24 19:28:55.183173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.948 [2024-07-24 19:28:55.183181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:08.948 [2024-07-24 19:28:55.183200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:08.948 qpair failed and we were unable to recover it. 00:28:09.208 [2024-07-24 19:28:55.193047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.208 [2024-07-24 19:28:55.193121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.208 [2024-07-24 19:28:55.193142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.208 [2024-07-24 19:28:55.193151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.208 [2024-07-24 19:28:55.193160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.208 [2024-07-24 19:28:55.193178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.208 qpair failed and we were unable to recover it. 
00:28:09.208 [2024-07-24 19:28:55.203057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.208 [2024-07-24 19:28:55.203136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.208 [2024-07-24 19:28:55.203153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.208 [2024-07-24 19:28:55.203163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.208 [2024-07-24 19:28:55.203171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.208 [2024-07-24 19:28:55.203189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.208 qpair failed and we were unable to recover it. 00:28:09.208 [2024-07-24 19:28:55.213084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.208 [2024-07-24 19:28:55.213173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.208 [2024-07-24 19:28:55.213189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.208 [2024-07-24 19:28:55.213199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.208 [2024-07-24 19:28:55.213208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.208 [2024-07-24 19:28:55.213226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.208 qpair failed and we were unable to recover it. 00:28:09.208 [2024-07-24 19:28:55.223132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.208 [2024-07-24 19:28:55.223211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.208 [2024-07-24 19:28:55.223228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.208 [2024-07-24 19:28:55.223238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.208 [2024-07-24 19:28:55.223246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.208 [2024-07-24 19:28:55.223264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.208 qpair failed and we were unable to recover it. 
00:28:09.208 [2024-07-24 19:28:55.233170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.208 [2024-07-24 19:28:55.233272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.208 [2024-07-24 19:28:55.233290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.208 [2024-07-24 19:28:55.233300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.208 [2024-07-24 19:28:55.233311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.208 [2024-07-24 19:28:55.233329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.208 qpair failed and we were unable to recover it. 00:28:09.208 [2024-07-24 19:28:55.243195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.209 [2024-07-24 19:28:55.243280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.209 [2024-07-24 19:28:55.243296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.209 [2024-07-24 19:28:55.243306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.209 [2024-07-24 19:28:55.243314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.209 [2024-07-24 19:28:55.243332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.209 qpair failed and we were unable to recover it. 00:28:09.209 [2024-07-24 19:28:55.253199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.209 [2024-07-24 19:28:55.253271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.209 [2024-07-24 19:28:55.253288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.209 [2024-07-24 19:28:55.253298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.209 [2024-07-24 19:28:55.253306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.209 [2024-07-24 19:28:55.253324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.209 qpair failed and we were unable to recover it. 
00:28:09.209 [2024-07-24 19:28:55.263236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.209 [2024-07-24 19:28:55.263310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.209 [2024-07-24 19:28:55.263328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.209 [2024-07-24 19:28:55.263337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.209 [2024-07-24 19:28:55.263346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.209 [2024-07-24 19:28:55.263363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.209 qpair failed and we were unable to recover it. 00:28:09.209 [2024-07-24 19:28:55.273265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.209 [2024-07-24 19:28:55.273343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.209 [2024-07-24 19:28:55.273360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.209 [2024-07-24 19:28:55.273370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.209 [2024-07-24 19:28:55.273378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.209 [2024-07-24 19:28:55.273396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.209 qpair failed and we were unable to recover it. 00:28:09.209 [2024-07-24 19:28:55.283287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.209 [2024-07-24 19:28:55.283361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.209 [2024-07-24 19:28:55.283379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.209 [2024-07-24 19:28:55.283389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.209 [2024-07-24 19:28:55.283397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.209 [2024-07-24 19:28:55.283415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.209 qpair failed and we were unable to recover it. 
00:28:09.209 [2024-07-24 19:28:55.293325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.209 [2024-07-24 19:28:55.293408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.209 [2024-07-24 19:28:55.293425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.209 [2024-07-24 19:28:55.293435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.209 [2024-07-24 19:28:55.293443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.209 [2024-07-24 19:28:55.293461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.209 qpair failed and we were unable to recover it. 00:28:09.209 [2024-07-24 19:28:55.303336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.209 [2024-07-24 19:28:55.303414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.209 [2024-07-24 19:28:55.303432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.209 [2024-07-24 19:28:55.303441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.209 [2024-07-24 19:28:55.303450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.209 [2024-07-24 19:28:55.303468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.209 qpair failed and we were unable to recover it. 00:28:09.209 [2024-07-24 19:28:55.313366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.209 [2024-07-24 19:28:55.313453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.209 [2024-07-24 19:28:55.313471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.209 [2024-07-24 19:28:55.313480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.209 [2024-07-24 19:28:55.313489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.209 [2024-07-24 19:28:55.313507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.209 qpair failed and we were unable to recover it. 
00:28:09.209 [2024-07-24 19:28:55.323388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.209 [2024-07-24 19:28:55.323466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.209 [2024-07-24 19:28:55.323483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.209 [2024-07-24 19:28:55.323493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.209 [2024-07-24 19:28:55.323504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.209 [2024-07-24 19:28:55.323522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.209 qpair failed and we were unable to recover it. 00:28:09.209 [2024-07-24 19:28:55.333437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.209 [2024-07-24 19:28:55.333513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.209 [2024-07-24 19:28:55.333530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.209 [2024-07-24 19:28:55.333540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.209 [2024-07-24 19:28:55.333549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.209 [2024-07-24 19:28:55.333566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.209 qpair failed and we were unable to recover it. 00:28:09.209 [2024-07-24 19:28:55.343430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.209 [2024-07-24 19:28:55.343502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.209 [2024-07-24 19:28:55.343519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.209 [2024-07-24 19:28:55.343529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.209 [2024-07-24 19:28:55.343538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.209 [2024-07-24 19:28:55.343555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.209 qpair failed and we were unable to recover it. 
00:28:09.209 [2024-07-24 19:28:55.353484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.209 [2024-07-24 19:28:55.353558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.209 [2024-07-24 19:28:55.353576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.209 [2024-07-24 19:28:55.353586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.209 [2024-07-24 19:28:55.353595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.209 [2024-07-24 19:28:55.353612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.209 qpair failed and we were unable to recover it. 00:28:09.209 [2024-07-24 19:28:55.363512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.210 [2024-07-24 19:28:55.363602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.210 [2024-07-24 19:28:55.363619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.210 [2024-07-24 19:28:55.363629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.210 [2024-07-24 19:28:55.363637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.210 [2024-07-24 19:28:55.363655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.210 qpair failed and we were unable to recover it. 00:28:09.210 [2024-07-24 19:28:55.373520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.210 [2024-07-24 19:28:55.373597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.210 [2024-07-24 19:28:55.373615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.210 [2024-07-24 19:28:55.373624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.210 [2024-07-24 19:28:55.373633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.210 [2024-07-24 19:28:55.373651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.210 qpair failed and we were unable to recover it. 
00:28:09.210 [2024-07-24 19:28:55.383581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.210 [2024-07-24 19:28:55.383656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.210 [2024-07-24 19:28:55.383674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.210 [2024-07-24 19:28:55.383684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.210 [2024-07-24 19:28:55.383692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.210 [2024-07-24 19:28:55.383710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.210 qpair failed and we were unable to recover it. 00:28:09.210 [2024-07-24 19:28:55.393641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.210 [2024-07-24 19:28:55.393758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.210 [2024-07-24 19:28:55.393776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.210 [2024-07-24 19:28:55.393785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.210 [2024-07-24 19:28:55.393794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.210 [2024-07-24 19:28:55.393812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.210 qpair failed and we were unable to recover it. 00:28:09.210 [2024-07-24 19:28:55.403623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.210 [2024-07-24 19:28:55.403698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.210 [2024-07-24 19:28:55.403729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.210 [2024-07-24 19:28:55.403739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.210 [2024-07-24 19:28:55.403748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.210 [2024-07-24 19:28:55.403766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.210 qpair failed and we were unable to recover it. 
00:28:09.210 [2024-07-24 19:28:55.413637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.210 [2024-07-24 19:28:55.413719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.210 [2024-07-24 19:28:55.413738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.210 [2024-07-24 19:28:55.413753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.210 [2024-07-24 19:28:55.413762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.210 [2024-07-24 19:28:55.413780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.210 qpair failed and we were unable to recover it. 00:28:09.210 [2024-07-24 19:28:55.423689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.210 [2024-07-24 19:28:55.423800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.210 [2024-07-24 19:28:55.423818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.210 [2024-07-24 19:28:55.423827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.210 [2024-07-24 19:28:55.423836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.210 [2024-07-24 19:28:55.423855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.210 qpair failed and we were unable to recover it. 00:28:09.210 [2024-07-24 19:28:55.433723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.210 [2024-07-24 19:28:55.433797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.210 [2024-07-24 19:28:55.433815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.210 [2024-07-24 19:28:55.433825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.210 [2024-07-24 19:28:55.433833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.210 [2024-07-24 19:28:55.433852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.210 qpair failed and we were unable to recover it. 
00:28:09.210 [2024-07-24 19:28:55.443746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.210 [2024-07-24 19:28:55.443851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.210 [2024-07-24 19:28:55.443868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.210 [2024-07-24 19:28:55.443878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.210 [2024-07-24 19:28:55.443887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.210 [2024-07-24 19:28:55.443905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.210 qpair failed and we were unable to recover it. 00:28:09.471 [2024-07-24 19:28:55.453739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.471 [2024-07-24 19:28:55.453812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.471 [2024-07-24 19:28:55.453830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.471 [2024-07-24 19:28:55.453841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.471 [2024-07-24 19:28:55.453850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.471 [2024-07-24 19:28:55.453869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.471 qpair failed and we were unable to recover it. 00:28:09.472 [2024-07-24 19:28:55.463798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.472 [2024-07-24 19:28:55.463877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.472 [2024-07-24 19:28:55.463894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.472 [2024-07-24 19:28:55.463904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.472 [2024-07-24 19:28:55.463913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.472 [2024-07-24 19:28:55.463931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.472 qpair failed and we were unable to recover it. 
00:28:09.472 [2024-07-24 19:28:55.473817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.472 [2024-07-24 19:28:55.473899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.472 [2024-07-24 19:28:55.473917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.472 [2024-07-24 19:28:55.473926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.472 [2024-07-24 19:28:55.473935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.472 [2024-07-24 19:28:55.473953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.472 qpair failed and we were unable to recover it. 00:28:09.472 [2024-07-24 19:28:55.483786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.472 [2024-07-24 19:28:55.483932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.472 [2024-07-24 19:28:55.483949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.472 [2024-07-24 19:28:55.483958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.472 [2024-07-24 19:28:55.483967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.472 [2024-07-24 19:28:55.483985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.472 qpair failed and we were unable to recover it. 00:28:09.472 [2024-07-24 19:28:55.493856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.472 [2024-07-24 19:28:55.493930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.472 [2024-07-24 19:28:55.493947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.472 [2024-07-24 19:28:55.493958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.472 [2024-07-24 19:28:55.493966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.472 [2024-07-24 19:28:55.493984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.472 qpair failed and we were unable to recover it. 
00:28:09.472 [2024-07-24 19:28:55.503931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.472 [2024-07-24 19:28:55.504039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.472 [2024-07-24 19:28:55.504059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.472 [2024-07-24 19:28:55.504069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.472 [2024-07-24 19:28:55.504077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.472 [2024-07-24 19:28:55.504095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.472 qpair failed and we were unable to recover it. 00:28:09.472 [2024-07-24 19:28:55.513939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.472 [2024-07-24 19:28:55.514014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.472 [2024-07-24 19:28:55.514032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.472 [2024-07-24 19:28:55.514041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.472 [2024-07-24 19:28:55.514050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.472 [2024-07-24 19:28:55.514067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.472 qpair failed and we were unable to recover it. 00:28:09.472 [2024-07-24 19:28:55.523958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.472 [2024-07-24 19:28:55.524043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.472 [2024-07-24 19:28:55.524060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.472 [2024-07-24 19:28:55.524070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.472 [2024-07-24 19:28:55.524078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.472 [2024-07-24 19:28:55.524096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.472 qpair failed and we were unable to recover it. 
00:28:09.472 [2024-07-24 19:28:55.533979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.472 [2024-07-24 19:28:55.534101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.472 [2024-07-24 19:28:55.534118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.472 [2024-07-24 19:28:55.534127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.472 [2024-07-24 19:28:55.534136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.472 [2024-07-24 19:28:55.534154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.472 qpair failed and we were unable to recover it. 00:28:09.472 [2024-07-24 19:28:55.544002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.472 [2024-07-24 19:28:55.544077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.472 [2024-07-24 19:28:55.544095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.472 [2024-07-24 19:28:55.544104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.472 [2024-07-24 19:28:55.544113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.472 [2024-07-24 19:28:55.544134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.472 qpair failed and we were unable to recover it. 00:28:09.472 [2024-07-24 19:28:55.554047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.472 [2024-07-24 19:28:55.554141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.472 [2024-07-24 19:28:55.554157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.472 [2024-07-24 19:28:55.554167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.472 [2024-07-24 19:28:55.554175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.472 [2024-07-24 19:28:55.554193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.472 qpair failed and we were unable to recover it. 
00:28:09.472 [2024-07-24 19:28:55.564081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.472 [2024-07-24 19:28:55.564155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.472 [2024-07-24 19:28:55.564172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.472 [2024-07-24 19:28:55.564182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.472 [2024-07-24 19:28:55.564190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.472 [2024-07-24 19:28:55.564208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.472 qpair failed and we were unable to recover it. 00:28:09.472 [2024-07-24 19:28:55.574093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.472 [2024-07-24 19:28:55.574171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.472 [2024-07-24 19:28:55.574189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.472 [2024-07-24 19:28:55.574199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.472 [2024-07-24 19:28:55.574207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.472 [2024-07-24 19:28:55.574225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.472 qpair failed and we were unable to recover it. 00:28:09.472 [2024-07-24 19:28:55.584165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.473 [2024-07-24 19:28:55.584274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.473 [2024-07-24 19:28:55.584291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.473 [2024-07-24 19:28:55.584301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.473 [2024-07-24 19:28:55.584309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.473 [2024-07-24 19:28:55.584328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.473 qpair failed and we were unable to recover it. 
00:28:09.473 [2024-07-24 19:28:55.594172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.473 [2024-07-24 19:28:55.594247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.473 [2024-07-24 19:28:55.594268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.473 [2024-07-24 19:28:55.594277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.473 [2024-07-24 19:28:55.594286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.473 [2024-07-24 19:28:55.594304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.473 qpair failed and we were unable to recover it. 00:28:09.473 [2024-07-24 19:28:55.604106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.473 [2024-07-24 19:28:55.604190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.473 [2024-07-24 19:28:55.604208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.473 [2024-07-24 19:28:55.604218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.473 [2024-07-24 19:28:55.604226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.473 [2024-07-24 19:28:55.604244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.473 qpair failed and we were unable to recover it. 00:28:09.473 [2024-07-24 19:28:55.614201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.473 [2024-07-24 19:28:55.614295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.473 [2024-07-24 19:28:55.614312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.473 [2024-07-24 19:28:55.614322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.473 [2024-07-24 19:28:55.614330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:09.473 [2024-07-24 19:28:55.614348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.473 qpair failed and we were unable to recover it. 
00:28:09.473 [2024-07-24 19:28:55.624243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.473 [2024-07-24 19:28:55.624320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.473 [2024-07-24 19:28:55.624337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.473 [2024-07-24 19:28:55.624347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.473 [2024-07-24 19:28:55.624356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.473 [2024-07-24 19:28:55.624373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.473 qpair failed and we were unable to recover it.
00:28:09.473 [2024-07-24 19:28:55.634253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.473 [2024-07-24 19:28:55.634338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.473 [2024-07-24 19:28:55.634355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.473 [2024-07-24 19:28:55.634365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.473 [2024-07-24 19:28:55.634377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.473 [2024-07-24 19:28:55.634394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.473 qpair failed and we were unable to recover it.
00:28:09.473 [2024-07-24 19:28:55.644266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.473 [2024-07-24 19:28:55.644340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.473 [2024-07-24 19:28:55.644357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.473 [2024-07-24 19:28:55.644367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.473 [2024-07-24 19:28:55.644376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.473 [2024-07-24 19:28:55.644393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.473 qpair failed and we were unable to recover it.
00:28:09.473 [2024-07-24 19:28:55.654290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.473 [2024-07-24 19:28:55.654362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.473 [2024-07-24 19:28:55.654379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.473 [2024-07-24 19:28:55.654389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.473 [2024-07-24 19:28:55.654398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.473 [2024-07-24 19:28:55.654415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.473 qpair failed and we were unable to recover it.
00:28:09.473 [2024-07-24 19:28:55.664350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.473 [2024-07-24 19:28:55.664466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.473 [2024-07-24 19:28:55.664483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.473 [2024-07-24 19:28:55.664493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.473 [2024-07-24 19:28:55.664502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.473 [2024-07-24 19:28:55.664520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.473 qpair failed and we were unable to recover it.
00:28:09.473 [2024-07-24 19:28:55.674302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.473 [2024-07-24 19:28:55.674383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.473 [2024-07-24 19:28:55.674401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.473 [2024-07-24 19:28:55.674411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.473 [2024-07-24 19:28:55.674420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.473 [2024-07-24 19:28:55.674437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.473 qpair failed and we were unable to recover it.
00:28:09.473 [2024-07-24 19:28:55.684427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.473 [2024-07-24 19:28:55.684541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.473 [2024-07-24 19:28:55.684558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.473 [2024-07-24 19:28:55.684568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.473 [2024-07-24 19:28:55.684576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.473 [2024-07-24 19:28:55.684594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.473 qpair failed and we were unable to recover it.
00:28:09.473 [2024-07-24 19:28:55.694414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.473 [2024-07-24 19:28:55.694489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.473 [2024-07-24 19:28:55.694507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.473 [2024-07-24 19:28:55.694517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.473 [2024-07-24 19:28:55.694525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.473 [2024-07-24 19:28:55.694543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.473 qpair failed and we were unable to recover it.
00:28:09.473 [2024-07-24 19:28:55.704474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.473 [2024-07-24 19:28:55.704551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.473 [2024-07-24 19:28:55.704568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.474 [2024-07-24 19:28:55.704578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.474 [2024-07-24 19:28:55.704587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.474 [2024-07-24 19:28:55.704605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.474 qpair failed and we were unable to recover it.
00:28:09.735 [2024-07-24 19:28:55.714565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.735 [2024-07-24 19:28:55.714639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.735 [2024-07-24 19:28:55.714657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.735 [2024-07-24 19:28:55.714667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.735 [2024-07-24 19:28:55.714676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.735 [2024-07-24 19:28:55.714694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.735 qpair failed and we were unable to recover it.
00:28:09.735 [2024-07-24 19:28:55.724530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.735 [2024-07-24 19:28:55.724606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.735 [2024-07-24 19:28:55.724624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.735 [2024-07-24 19:28:55.724633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.735 [2024-07-24 19:28:55.724645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.735 [2024-07-24 19:28:55.724664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.735 qpair failed and we were unable to recover it.
00:28:09.735 [2024-07-24 19:28:55.734584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.735 [2024-07-24 19:28:55.734659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.735 [2024-07-24 19:28:55.734676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.735 [2024-07-24 19:28:55.734686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.735 [2024-07-24 19:28:55.734695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.735 [2024-07-24 19:28:55.734713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.735 qpair failed and we were unable to recover it.
00:28:09.735 [2024-07-24 19:28:55.744602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.735 [2024-07-24 19:28:55.744713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.735 [2024-07-24 19:28:55.744733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.735 [2024-07-24 19:28:55.744743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.735 [2024-07-24 19:28:55.744751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.735 [2024-07-24 19:28:55.744769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.735 qpair failed and we were unable to recover it.
00:28:09.735 [2024-07-24 19:28:55.754626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.735 [2024-07-24 19:28:55.754701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.735 [2024-07-24 19:28:55.754723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.735 [2024-07-24 19:28:55.754733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.735 [2024-07-24 19:28:55.754741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.735 [2024-07-24 19:28:55.754760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.735 qpair failed and we were unable to recover it.
00:28:09.735 [2024-07-24 19:28:55.764669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.735 [2024-07-24 19:28:55.764749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.735 [2024-07-24 19:28:55.764767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.735 [2024-07-24 19:28:55.764777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.735 [2024-07-24 19:28:55.764786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.735 [2024-07-24 19:28:55.764804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.735 qpair failed and we were unable to recover it.
00:28:09.735 [2024-07-24 19:28:55.774656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.735 [2024-07-24 19:28:55.774735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.735 [2024-07-24 19:28:55.774753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.735 [2024-07-24 19:28:55.774762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.735 [2024-07-24 19:28:55.774771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.736 [2024-07-24 19:28:55.774789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.736 qpair failed and we were unable to recover it.
00:28:09.736 [2024-07-24 19:28:55.784683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.736 [2024-07-24 19:28:55.784768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.736 [2024-07-24 19:28:55.784785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.736 [2024-07-24 19:28:55.784795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.736 [2024-07-24 19:28:55.784804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.736 [2024-07-24 19:28:55.784822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.736 qpair failed and we were unable to recover it.
00:28:09.736 [2024-07-24 19:28:55.794731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.736 [2024-07-24 19:28:55.794837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.736 [2024-07-24 19:28:55.794854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.736 [2024-07-24 19:28:55.794864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.736 [2024-07-24 19:28:55.794873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.736 [2024-07-24 19:28:55.794891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.736 qpair failed and we were unable to recover it.
00:28:09.736 [2024-07-24 19:28:55.804673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.736 [2024-07-24 19:28:55.804754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.736 [2024-07-24 19:28:55.804772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.736 [2024-07-24 19:28:55.804782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.736 [2024-07-24 19:28:55.804791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.736 [2024-07-24 19:28:55.804809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.736 qpair failed and we were unable to recover it.
00:28:09.736 [2024-07-24 19:28:55.814705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.736 [2024-07-24 19:28:55.814784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.736 [2024-07-24 19:28:55.814802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.736 [2024-07-24 19:28:55.814815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.736 [2024-07-24 19:28:55.814824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.736 [2024-07-24 19:28:55.814842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.736 qpair failed and we were unable to recover it.
00:28:09.736 [2024-07-24 19:28:55.824824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.736 [2024-07-24 19:28:55.824938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.736 [2024-07-24 19:28:55.824957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.736 [2024-07-24 19:28:55.824967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.736 [2024-07-24 19:28:55.824975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.736 [2024-07-24 19:28:55.824993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.736 qpair failed and we were unable to recover it.
00:28:09.736 [2024-07-24 19:28:55.834782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.736 [2024-07-24 19:28:55.834856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.736 [2024-07-24 19:28:55.834873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.736 [2024-07-24 19:28:55.834883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.736 [2024-07-24 19:28:55.834892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.736 [2024-07-24 19:28:55.834910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.736 qpair failed and we were unable to recover it.
00:28:09.736 [2024-07-24 19:28:55.844851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.736 [2024-07-24 19:28:55.844947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.736 [2024-07-24 19:28:55.844964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.736 [2024-07-24 19:28:55.844974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.736 [2024-07-24 19:28:55.844982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.736 [2024-07-24 19:28:55.845000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.736 qpair failed and we were unable to recover it.
00:28:09.736 [2024-07-24 19:28:55.854879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.736 [2024-07-24 19:28:55.854977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.736 [2024-07-24 19:28:55.854994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.736 [2024-07-24 19:28:55.855003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.736 [2024-07-24 19:28:55.855012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.736 [2024-07-24 19:28:55.855030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.736 qpair failed and we were unable to recover it.
00:28:09.736 [2024-07-24 19:28:55.864843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.736 [2024-07-24 19:28:55.864931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.736 [2024-07-24 19:28:55.864948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.736 [2024-07-24 19:28:55.864958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.736 [2024-07-24 19:28:55.864967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.736 [2024-07-24 19:28:55.864985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.736 qpair failed and we were unable to recover it.
00:28:09.736 [2024-07-24 19:28:55.874971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.736 [2024-07-24 19:28:55.875085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.736 [2024-07-24 19:28:55.875102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.736 [2024-07-24 19:28:55.875111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.736 [2024-07-24 19:28:55.875119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.736 [2024-07-24 19:28:55.875138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.736 qpair failed and we were unable to recover it.
00:28:09.736 [2024-07-24 19:28:55.884984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.736 [2024-07-24 19:28:55.885058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.736 [2024-07-24 19:28:55.885075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.736 [2024-07-24 19:28:55.885085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.736 [2024-07-24 19:28:55.885094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.736 [2024-07-24 19:28:55.885111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.736 qpair failed and we were unable to recover it.
00:28:09.736 [2024-07-24 19:28:55.895017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.736 [2024-07-24 19:28:55.895107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.736 [2024-07-24 19:28:55.895124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.736 [2024-07-24 19:28:55.895134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.736 [2024-07-24 19:28:55.895143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.736 [2024-07-24 19:28:55.895161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.736 qpair failed and we were unable to recover it.
00:28:09.736 [2024-07-24 19:28:55.905026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.737 [2024-07-24 19:28:55.905103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.737 [2024-07-24 19:28:55.905123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.737 [2024-07-24 19:28:55.905133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.737 [2024-07-24 19:28:55.905141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.737 [2024-07-24 19:28:55.905159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.737 qpair failed and we were unable to recover it.
00:28:09.737 [2024-07-24 19:28:55.915095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.737 [2024-07-24 19:28:55.915207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.737 [2024-07-24 19:28:55.915225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.737 [2024-07-24 19:28:55.915235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.737 [2024-07-24 19:28:55.915243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.737 [2024-07-24 19:28:55.915262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.737 qpair failed and we were unable to recover it.
00:28:09.737 [2024-07-24 19:28:55.925085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.737 [2024-07-24 19:28:55.925158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.737 [2024-07-24 19:28:55.925176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.737 [2024-07-24 19:28:55.925186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.737 [2024-07-24 19:28:55.925195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.737 [2024-07-24 19:28:55.925213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.737 qpair failed and we were unable to recover it.
00:28:09.737 [2024-07-24 19:28:55.935102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.737 [2024-07-24 19:28:55.935190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.737 [2024-07-24 19:28:55.935208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.737 [2024-07-24 19:28:55.935217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.737 [2024-07-24 19:28:55.935226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.737 [2024-07-24 19:28:55.935244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.737 qpair failed and we were unable to recover it.
00:28:09.737 [2024-07-24 19:28:55.945186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.737 [2024-07-24 19:28:55.945338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.737 [2024-07-24 19:28:55.945355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.737 [2024-07-24 19:28:55.945365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.737 [2024-07-24 19:28:55.945373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.737 [2024-07-24 19:28:55.945394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.737 qpair failed and we were unable to recover it.
00:28:09.737 [2024-07-24 19:28:55.955174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.737 [2024-07-24 19:28:55.955252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.737 [2024-07-24 19:28:55.955270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.737 [2024-07-24 19:28:55.955280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.737 [2024-07-24 19:28:55.955289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.737 [2024-07-24 19:28:55.955307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.737 qpair failed and we were unable to recover it.
00:28:09.737 [2024-07-24 19:28:55.965174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.737 [2024-07-24 19:28:55.965252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.737 [2024-07-24 19:28:55.965270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.737 [2024-07-24 19:28:55.965280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.737 [2024-07-24 19:28:55.965289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.737 [2024-07-24 19:28:55.965307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.737 qpair failed and we were unable to recover it.
00:28:09.998 [2024-07-24 19:28:55.975254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.998 [2024-07-24 19:28:55.975361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.998 [2024-07-24 19:28:55.975378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.998 [2024-07-24 19:28:55.975388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.998 [2024-07-24 19:28:55.975397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.998 [2024-07-24 19:28:55.975415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.998 qpair failed and we were unable to recover it.
00:28:09.998 [2024-07-24 19:28:55.985243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.998 [2024-07-24 19:28:55.985317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.998 [2024-07-24 19:28:55.985334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.998 [2024-07-24 19:28:55.985345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.998 [2024-07-24 19:28:55.985354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.998 [2024-07-24 19:28:55.985372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.998 qpair failed and we were unable to recover it.
00:28:09.998 [2024-07-24 19:28:55.995276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.998 [2024-07-24 19:28:55.995431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.998 [2024-07-24 19:28:55.995451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.998 [2024-07-24 19:28:55.995460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.998 [2024-07-24 19:28:55.995469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.998 [2024-07-24 19:28:55.995488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.998 qpair failed and we were unable to recover it.
00:28:09.998 [2024-07-24 19:28:56.005342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.998 [2024-07-24 19:28:56.005455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.998 [2024-07-24 19:28:56.005474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.998 [2024-07-24 19:28:56.005486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.998 [2024-07-24 19:28:56.005496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.998 [2024-07-24 19:28:56.005514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.998 qpair failed and we were unable to recover it.
00:28:09.998 [2024-07-24 19:28:56.015320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.998 [2024-07-24 19:28:56.015404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.998 [2024-07-24 19:28:56.015422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.998 [2024-07-24 19:28:56.015432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.998 [2024-07-24 19:28:56.015441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.998 [2024-07-24 19:28:56.015458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.998 qpair failed and we were unable to recover it.
00:28:09.998 [2024-07-24 19:28:56.025282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.998 [2024-07-24 19:28:56.025360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.998 [2024-07-24 19:28:56.025378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.998 [2024-07-24 19:28:56.025388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.998 [2024-07-24 19:28:56.025396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.998 [2024-07-24 19:28:56.025415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.999 qpair failed and we were unable to recover it.
00:28:09.999 [2024-07-24 19:28:56.035318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.999 [2024-07-24 19:28:56.035397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.999 [2024-07-24 19:28:56.035414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.999 [2024-07-24 19:28:56.035424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.999 [2024-07-24 19:28:56.035433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.999 [2024-07-24 19:28:56.035454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.999 qpair failed and we were unable to recover it.
00:28:09.999 [2024-07-24 19:28:56.045392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.999 [2024-07-24 19:28:56.045487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.999 [2024-07-24 19:28:56.045505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.999 [2024-07-24 19:28:56.045516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.999 [2024-07-24 19:28:56.045525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.999 [2024-07-24 19:28:56.045543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.999 qpair failed and we were unable to recover it.
00:28:09.999 [2024-07-24 19:28:56.055427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.999 [2024-07-24 19:28:56.055506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.999 [2024-07-24 19:28:56.055523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.999 [2024-07-24 19:28:56.055533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.999 [2024-07-24 19:28:56.055542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.999 [2024-07-24 19:28:56.055560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.999 qpair failed and we were unable to recover it.
00:28:09.999 [2024-07-24 19:28:56.065464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.999 [2024-07-24 19:28:56.065538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.999 [2024-07-24 19:28:56.065555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.999 [2024-07-24 19:28:56.065565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.999 [2024-07-24 19:28:56.065574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.999 [2024-07-24 19:28:56.065592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.999 qpair failed and we were unable to recover it.
00:28:09.999 [2024-07-24 19:28:56.075479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.999 [2024-07-24 19:28:56.075555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.999 [2024-07-24 19:28:56.075572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.999 [2024-07-24 19:28:56.075582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.999 [2024-07-24 19:28:56.075591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.999 [2024-07-24 19:28:56.075609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.999 qpair failed and we were unable to recover it.
00:28:09.999 [2024-07-24 19:28:56.085516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.999 [2024-07-24 19:28:56.085595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.999 [2024-07-24 19:28:56.085612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.999 [2024-07-24 19:28:56.085623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.999 [2024-07-24 19:28:56.085632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.999 [2024-07-24 19:28:56.085650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.999 qpair failed and we were unable to recover it.
00:28:09.999 [2024-07-24 19:28:56.095544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.999 [2024-07-24 19:28:56.095621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.999 [2024-07-24 19:28:56.095638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.999 [2024-07-24 19:28:56.095648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.999 [2024-07-24 19:28:56.095657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.999 [2024-07-24 19:28:56.095675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.999 qpair failed and we were unable to recover it.
00:28:09.999 [2024-07-24 19:28:56.105566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.999 [2024-07-24 19:28:56.105644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.999 [2024-07-24 19:28:56.105661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.999 [2024-07-24 19:28:56.105671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.999 [2024-07-24 19:28:56.105680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.999 [2024-07-24 19:28:56.105697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.999 qpair failed and we were unable to recover it.
00:28:09.999 [2024-07-24 19:28:56.115589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.999 [2024-07-24 19:28:56.115666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.999 [2024-07-24 19:28:56.115684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.999 [2024-07-24 19:28:56.115694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.999 [2024-07-24 19:28:56.115702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.999 [2024-07-24 19:28:56.115724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.999 qpair failed and we were unable to recover it.
00:28:09.999 [2024-07-24 19:28:56.125665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.999 [2024-07-24 19:28:56.125750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.999 [2024-07-24 19:28:56.125767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.999 [2024-07-24 19:28:56.125777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.999 [2024-07-24 19:28:56.125788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.999 [2024-07-24 19:28:56.125806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.999 qpair failed and we were unable to recover it.
00:28:09.999 [2024-07-24 19:28:56.135655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.999 [2024-07-24 19:28:56.135736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.999 [2024-07-24 19:28:56.135753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.999 [2024-07-24 19:28:56.135763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.999 [2024-07-24 19:28:56.135772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.999 [2024-07-24 19:28:56.135790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.999 qpair failed and we were unable to recover it.
00:28:09.999 [2024-07-24 19:28:56.145749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.999 [2024-07-24 19:28:56.145860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.999 [2024-07-24 19:28:56.145877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.999 [2024-07-24 19:28:56.145887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.999 [2024-07-24 19:28:56.145896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:09.999 [2024-07-24 19:28:56.145914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:09.999 qpair failed and we were unable to recover it.
00:28:09.999 [2024-07-24 19:28:56.155704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.999 [2024-07-24 19:28:56.155792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.000 [2024-07-24 19:28:56.155809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.000 [2024-07-24 19:28:56.155819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.000 [2024-07-24 19:28:56.155827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:10.000 [2024-07-24 19:28:56.155845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:10.000 qpair failed and we were unable to recover it.
00:28:10.000 [2024-07-24 19:28:56.165757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.000 [2024-07-24 19:28:56.165837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.000 [2024-07-24 19:28:56.165855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.000 [2024-07-24 19:28:56.165865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.000 [2024-07-24 19:28:56.165873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:10.000 [2024-07-24 19:28:56.165890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:10.000 qpair failed and we were unable to recover it.
00:28:10.000 [2024-07-24 19:28:56.175920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.000 [2024-07-24 19:28:56.176013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.000 [2024-07-24 19:28:56.176031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.000 [2024-07-24 19:28:56.176041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.000 [2024-07-24 19:28:56.176049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:10.000 [2024-07-24 19:28:56.176067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:10.000 qpair failed and we were unable to recover it.
00:28:10.000 [2024-07-24 19:28:56.185810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.000 [2024-07-24 19:28:56.185892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.000 [2024-07-24 19:28:56.185909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.000 [2024-07-24 19:28:56.185919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.000 [2024-07-24 19:28:56.185928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:10.000 [2024-07-24 19:28:56.185946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:10.000 qpair failed and we were unable to recover it.
00:28:10.000 [2024-07-24 19:28:56.195841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.000 [2024-07-24 19:28:56.195920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.000 [2024-07-24 19:28:56.195938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.000 [2024-07-24 19:28:56.195948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.000 [2024-07-24 19:28:56.195957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:10.000 [2024-07-24 19:28:56.195974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:10.000 qpair failed and we were unable to recover it.
00:28:10.000 [2024-07-24 19:28:56.205867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.000 [2024-07-24 19:28:56.205943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.000 [2024-07-24 19:28:56.205961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.000 [2024-07-24 19:28:56.205971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.000 [2024-07-24 19:28:56.205980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:10.000 [2024-07-24 19:28:56.205997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:10.000 qpair failed and we were unable to recover it.
00:28:10.000 [2024-07-24 19:28:56.215922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.000 [2024-07-24 19:28:56.215996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.000 [2024-07-24 19:28:56.216014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.000 [2024-07-24 19:28:56.216027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.000 [2024-07-24 19:28:56.216036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:10.000 [2024-07-24 19:28:56.216054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:10.000 qpair failed and we were unable to recover it.
00:28:10.000 [2024-07-24 19:28:56.225934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.000 [2024-07-24 19:28:56.226012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.000 [2024-07-24 19:28:56.226029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.000 [2024-07-24 19:28:56.226039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.000 [2024-07-24 19:28:56.226048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:10.000 [2024-07-24 19:28:56.226066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:10.000 qpair failed and we were unable to recover it.
00:28:10.000 [2024-07-24 19:28:56.235968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.000 [2024-07-24 19:28:56.236042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.000 [2024-07-24 19:28:56.236059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.000 [2024-07-24 19:28:56.236069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.000 [2024-07-24 19:28:56.236078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:10.261 [2024-07-24 19:28:56.236095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:10.261 qpair failed and we were unable to recover it.
00:28:10.261 [2024-07-24 19:28:56.246002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.261 [2024-07-24 19:28:56.246122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.261 [2024-07-24 19:28:56.246139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.261 [2024-07-24 19:28:56.246150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.261 [2024-07-24 19:28:56.246158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:10.261 [2024-07-24 19:28:56.246177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:10.261 qpair failed and we were unable to recover it.
00:28:10.261 [2024-07-24 19:28:56.256010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.261 [2024-07-24 19:28:56.256087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.261 [2024-07-24 19:28:56.256105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.261 [2024-07-24 19:28:56.256115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.261 [2024-07-24 19:28:56.256123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:10.261 [2024-07-24 19:28:56.256141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:10.261 qpair failed and we were unable to recover it.
00:28:10.261 [2024-07-24 19:28:56.266065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.261 [2024-07-24 19:28:56.266181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.261 [2024-07-24 19:28:56.266199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.261 [2024-07-24 19:28:56.266208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.261 [2024-07-24 19:28:56.266217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:10.261 [2024-07-24 19:28:56.266235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:10.261 qpair failed and we were unable to recover it.
00:28:10.261 [2024-07-24 19:28:56.276068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.261 [2024-07-24 19:28:56.276146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.261 [2024-07-24 19:28:56.276164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.261 [2024-07-24 19:28:56.276174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.261 [2024-07-24 19:28:56.276183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:10.261 [2024-07-24 19:28:56.276200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:10.261 qpair failed and we were unable to recover it.
00:28:10.261 [2024-07-24 19:28:56.286099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.261 [2024-07-24 19:28:56.286181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.261 [2024-07-24 19:28:56.286198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.261 [2024-07-24 19:28:56.286208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.261 [2024-07-24 19:28:56.286217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:10.261 [2024-07-24 19:28:56.286234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:10.261 qpair failed and we were unable to recover it.
00:28:10.261 [2024-07-24 19:28:56.296120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.261 [2024-07-24 19:28:56.296196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.261 [2024-07-24 19:28:56.296213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.261 [2024-07-24 19:28:56.296223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.261 [2024-07-24 19:28:56.296232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:10.261 [2024-07-24 19:28:56.296249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:10.261 qpair failed and we were unable to recover it.
00:28:10.261 [2024-07-24 19:28:56.306157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.261 [2024-07-24 19:28:56.306237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.261 [2024-07-24 19:28:56.306254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.261 [2024-07-24 19:28:56.306267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.261 [2024-07-24 19:28:56.306275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90
00:28:10.261 [2024-07-24 19:28:56.306293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:10.261 qpair failed and we were unable to recover it.
00:28:10.262 [2024-07-24 19:28:56.316190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.262 [2024-07-24 19:28:56.316295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.262 [2024-07-24 19:28:56.316312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.262 [2024-07-24 19:28:56.316322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.262 [2024-07-24 19:28:56.316331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.262 [2024-07-24 19:28:56.316349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.262 qpair failed and we were unable to recover it. 00:28:10.262 [2024-07-24 19:28:56.326219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.262 [2024-07-24 19:28:56.326292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.262 [2024-07-24 19:28:56.326310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.262 [2024-07-24 19:28:56.326320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.262 [2024-07-24 19:28:56.326329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.262 [2024-07-24 19:28:56.326347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.262 qpair failed and we were unable to recover it. 00:28:10.262 [2024-07-24 19:28:56.336226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.262 [2024-07-24 19:28:56.336306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.262 [2024-07-24 19:28:56.336323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.262 [2024-07-24 19:28:56.336333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.262 [2024-07-24 19:28:56.336342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.262 [2024-07-24 19:28:56.336360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.262 qpair failed and we were unable to recover it. 
00:28:10.262 [2024-07-24 19:28:56.346247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.262 [2024-07-24 19:28:56.346327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.262 [2024-07-24 19:28:56.346345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.262 [2024-07-24 19:28:56.346354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.262 [2024-07-24 19:28:56.346363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.262 [2024-07-24 19:28:56.346381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.262 qpair failed and we were unable to recover it. 00:28:10.262 [2024-07-24 19:28:56.356305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.262 [2024-07-24 19:28:56.356383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.262 [2024-07-24 19:28:56.356401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.262 [2024-07-24 19:28:56.356411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.262 [2024-07-24 19:28:56.356419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.262 [2024-07-24 19:28:56.356438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.262 qpair failed and we were unable to recover it. 00:28:10.262 [2024-07-24 19:28:56.366299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.262 [2024-07-24 19:28:56.366375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.262 [2024-07-24 19:28:56.366393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.262 [2024-07-24 19:28:56.366403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.262 [2024-07-24 19:28:56.366412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.262 [2024-07-24 19:28:56.366430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.262 qpair failed and we were unable to recover it. 
00:28:10.262 [2024-07-24 19:28:56.376402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.262 [2024-07-24 19:28:56.376494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.262 [2024-07-24 19:28:56.376511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.262 [2024-07-24 19:28:56.376521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.262 [2024-07-24 19:28:56.376529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.262 [2024-07-24 19:28:56.376547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.262 qpair failed and we were unable to recover it. 00:28:10.262 [2024-07-24 19:28:56.386431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.262 [2024-07-24 19:28:56.386510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.262 [2024-07-24 19:28:56.386528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.262 [2024-07-24 19:28:56.386538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.262 [2024-07-24 19:28:56.386547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.262 [2024-07-24 19:28:56.386566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.262 qpair failed and we were unable to recover it. 00:28:10.262 [2024-07-24 19:28:56.396355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.262 [2024-07-24 19:28:56.396434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.262 [2024-07-24 19:28:56.396455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.262 [2024-07-24 19:28:56.396465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.262 [2024-07-24 19:28:56.396474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.262 [2024-07-24 19:28:56.396492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.262 qpair failed and we were unable to recover it. 
00:28:10.262 [2024-07-24 19:28:56.406438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.262 [2024-07-24 19:28:56.406525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.262 [2024-07-24 19:28:56.406543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.262 [2024-07-24 19:28:56.406554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.262 [2024-07-24 19:28:56.406562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.262 [2024-07-24 19:28:56.406580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.262 qpair failed and we were unable to recover it. 00:28:10.262 [2024-07-24 19:28:56.416464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.263 [2024-07-24 19:28:56.416548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.263 [2024-07-24 19:28:56.416565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.263 [2024-07-24 19:28:56.416575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.263 [2024-07-24 19:28:56.416584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.263 [2024-07-24 19:28:56.416602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.263 qpair failed and we were unable to recover it. 00:28:10.263 [2024-07-24 19:28:56.426532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.263 [2024-07-24 19:28:56.426621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.263 [2024-07-24 19:28:56.426638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.263 [2024-07-24 19:28:56.426648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.263 [2024-07-24 19:28:56.426656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.263 [2024-07-24 19:28:56.426674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.263 qpair failed and we were unable to recover it. 
00:28:10.263 [2024-07-24 19:28:56.436539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.263 [2024-07-24 19:28:56.436616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.263 [2024-07-24 19:28:56.436633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.263 [2024-07-24 19:28:56.436643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.263 [2024-07-24 19:28:56.436652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.263 [2024-07-24 19:28:56.436674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.263 qpair failed and we were unable to recover it. 00:28:10.263 [2024-07-24 19:28:56.446565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.263 [2024-07-24 19:28:56.446638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.263 [2024-07-24 19:28:56.446656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.263 [2024-07-24 19:28:56.446665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.263 [2024-07-24 19:28:56.446674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.263 [2024-07-24 19:28:56.446692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.263 qpair failed and we were unable to recover it. 00:28:10.263 [2024-07-24 19:28:56.456663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.263 [2024-07-24 19:28:56.456755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.263 [2024-07-24 19:28:56.456773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.263 [2024-07-24 19:28:56.456782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.263 [2024-07-24 19:28:56.456791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.263 [2024-07-24 19:28:56.456809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.263 qpair failed and we were unable to recover it. 
00:28:10.263 [2024-07-24 19:28:56.466672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.263 [2024-07-24 19:28:56.466789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.263 [2024-07-24 19:28:56.466807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.263 [2024-07-24 19:28:56.466817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.263 [2024-07-24 19:28:56.466825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.263 [2024-07-24 19:28:56.466843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.263 qpair failed and we were unable to recover it. 00:28:10.263 [2024-07-24 19:28:56.476728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.263 [2024-07-24 19:28:56.476822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.263 [2024-07-24 19:28:56.476842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.263 [2024-07-24 19:28:56.476852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.263 [2024-07-24 19:28:56.476861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.263 [2024-07-24 19:28:56.476880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.263 qpair failed and we were unable to recover it. 00:28:10.263 [2024-07-24 19:28:56.486672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.263 [2024-07-24 19:28:56.486753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.263 [2024-07-24 19:28:56.486774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.263 [2024-07-24 19:28:56.486784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.263 [2024-07-24 19:28:56.486792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.263 [2024-07-24 19:28:56.486810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.263 qpair failed and we were unable to recover it. 
00:28:10.263 [2024-07-24 19:28:56.496721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.263 [2024-07-24 19:28:56.496807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.263 [2024-07-24 19:28:56.496825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.263 [2024-07-24 19:28:56.496834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.263 [2024-07-24 19:28:56.496843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.263 [2024-07-24 19:28:56.496861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.263 qpair failed and we were unable to recover it. 00:28:10.525 [2024-07-24 19:28:56.506736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.525 [2024-07-24 19:28:56.506844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.525 [2024-07-24 19:28:56.506862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.525 [2024-07-24 19:28:56.506873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.525 [2024-07-24 19:28:56.506882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.525 [2024-07-24 19:28:56.506901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.525 qpair failed and we were unable to recover it. 00:28:10.525 [2024-07-24 19:28:56.516726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.525 [2024-07-24 19:28:56.516809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.525 [2024-07-24 19:28:56.516827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.525 [2024-07-24 19:28:56.516837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.525 [2024-07-24 19:28:56.516846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.525 [2024-07-24 19:28:56.516865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.525 qpair failed and we were unable to recover it. 
00:28:10.525 [2024-07-24 19:28:56.526750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.525 [2024-07-24 19:28:56.526899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.525 [2024-07-24 19:28:56.526917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.525 [2024-07-24 19:28:56.526927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.525 [2024-07-24 19:28:56.526940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.525 [2024-07-24 19:28:56.526958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.525 qpair failed and we were unable to recover it. 00:28:10.525 [2024-07-24 19:28:56.536921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.525 [2024-07-24 19:28:56.537079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.525 [2024-07-24 19:28:56.537095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.525 [2024-07-24 19:28:56.537105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.525 [2024-07-24 19:28:56.537114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.525 [2024-07-24 19:28:56.537132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.525 qpair failed and we were unable to recover it. 00:28:10.525 [2024-07-24 19:28:56.546788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.525 [2024-07-24 19:28:56.546863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.525 [2024-07-24 19:28:56.546880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.525 [2024-07-24 19:28:56.546890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.525 [2024-07-24 19:28:56.546899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.525 [2024-07-24 19:28:56.546916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.525 qpair failed and we were unable to recover it. 
00:28:10.525 [2024-07-24 19:28:56.556888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.525 [2024-07-24 19:28:56.556968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.525 [2024-07-24 19:28:56.556986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.525 [2024-07-24 19:28:56.556997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.525 [2024-07-24 19:28:56.557005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.525 [2024-07-24 19:28:56.557023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.525 qpair failed and we were unable to recover it. 00:28:10.525 [2024-07-24 19:28:56.566834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.525 [2024-07-24 19:28:56.566914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.525 [2024-07-24 19:28:56.566932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.525 [2024-07-24 19:28:56.566942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.525 [2024-07-24 19:28:56.566951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.525 [2024-07-24 19:28:56.566969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.525 qpair failed and we were unable to recover it. 00:28:10.525 [2024-07-24 19:28:56.576925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.525 [2024-07-24 19:28:56.577001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.525 [2024-07-24 19:28:56.577018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.525 [2024-07-24 19:28:56.577028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.525 [2024-07-24 19:28:56.577036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.525 [2024-07-24 19:28:56.577054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.525 qpair failed and we were unable to recover it. 
00:28:10.525 [2024-07-24 19:28:56.586964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.525 [2024-07-24 19:28:56.587040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.525 [2024-07-24 19:28:56.587057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.525 [2024-07-24 19:28:56.587067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.525 [2024-07-24 19:28:56.587076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.525 [2024-07-24 19:28:56.587093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.525 qpair failed and we were unable to recover it. 00:28:10.525 [2024-07-24 19:28:56.596989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.525 [2024-07-24 19:28:56.597066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.525 [2024-07-24 19:28:56.597084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.525 [2024-07-24 19:28:56.597094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.525 [2024-07-24 19:28:56.597102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.525 [2024-07-24 19:28:56.597121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.525 qpair failed and we were unable to recover it. 00:28:10.525 [2024-07-24 19:28:56.607044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.525 [2024-07-24 19:28:56.607124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.525 [2024-07-24 19:28:56.607141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.525 [2024-07-24 19:28:56.607151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.525 [2024-07-24 19:28:56.607160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.526 [2024-07-24 19:28:56.607177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.526 qpair failed and we were unable to recover it. 
00:28:10.526 [2024-07-24 19:28:56.617014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.526 [2024-07-24 19:28:56.617098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.526 [2024-07-24 19:28:56.617115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.526 [2024-07-24 19:28:56.617128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.526 [2024-07-24 19:28:56.617137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.526 [2024-07-24 19:28:56.617155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.526 qpair failed and we were unable to recover it. 00:28:10.526 [2024-07-24 19:28:56.627078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.526 [2024-07-24 19:28:56.627199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.526 [2024-07-24 19:28:56.627217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.526 [2024-07-24 19:28:56.627227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.526 [2024-07-24 19:28:56.627235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.526 [2024-07-24 19:28:56.627253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.526 qpair failed and we were unable to recover it. 00:28:10.526 [2024-07-24 19:28:56.637107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.526 [2024-07-24 19:28:56.637185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.526 [2024-07-24 19:28:56.637202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.526 [2024-07-24 19:28:56.637212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.526 [2024-07-24 19:28:56.637221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.526 [2024-07-24 19:28:56.637239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.526 qpair failed and we were unable to recover it. 
00:28:10.526 [2024-07-24 19:28:56.647179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.526 [2024-07-24 19:28:56.647294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.526 [2024-07-24 19:28:56.647311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.526 [2024-07-24 19:28:56.647321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.526 [2024-07-24 19:28:56.647329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.526 [2024-07-24 19:28:56.647347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.526 qpair failed and we were unable to recover it. 00:28:10.526 [2024-07-24 19:28:56.657171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.526 [2024-07-24 19:28:56.657243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.526 [2024-07-24 19:28:56.657260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.526 [2024-07-24 19:28:56.657270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.526 [2024-07-24 19:28:56.657279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.526 [2024-07-24 19:28:56.657296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.526 qpair failed and we were unable to recover it. 00:28:10.526 [2024-07-24 19:28:56.667233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.526 [2024-07-24 19:28:56.667307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.526 [2024-07-24 19:28:56.667324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.526 [2024-07-24 19:28:56.667334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.526 [2024-07-24 19:28:56.667343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.526 [2024-07-24 19:28:56.667361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.526 qpair failed and we were unable to recover it. 
00:28:10.526 [2024-07-24 19:28:56.677220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.526 [2024-07-24 19:28:56.677304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.526 [2024-07-24 19:28:56.677321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.526 [2024-07-24 19:28:56.677331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.526 [2024-07-24 19:28:56.677340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.526 [2024-07-24 19:28:56.677357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.526 qpair failed and we were unable to recover it. 00:28:10.526 [2024-07-24 19:28:56.687245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.526 [2024-07-24 19:28:56.687324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.526 [2024-07-24 19:28:56.687342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.526 [2024-07-24 19:28:56.687352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.526 [2024-07-24 19:28:56.687361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.526 [2024-07-24 19:28:56.687379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.526 qpair failed and we were unable to recover it. 00:28:10.526 [2024-07-24 19:28:56.697268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.526 [2024-07-24 19:28:56.697351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.526 [2024-07-24 19:28:56.697369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.526 [2024-07-24 19:28:56.697380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.526 [2024-07-24 19:28:56.697389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.526 [2024-07-24 19:28:56.697406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.526 qpair failed and we were unable to recover it. 
00:28:10.526 [2024-07-24 19:28:56.707306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.526 [2024-07-24 19:28:56.707391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.526 [2024-07-24 19:28:56.707408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.526 [2024-07-24 19:28:56.707421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.526 [2024-07-24 19:28:56.707430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.526 [2024-07-24 19:28:56.707448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.526 qpair failed and we were unable to recover it. 00:28:10.526 [2024-07-24 19:28:56.717347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.526 [2024-07-24 19:28:56.717456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.526 [2024-07-24 19:28:56.717473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.526 [2024-07-24 19:28:56.717483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.526 [2024-07-24 19:28:56.717492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.526 [2024-07-24 19:28:56.717510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.526 qpair failed and we were unable to recover it. 00:28:10.526 [2024-07-24 19:28:56.727360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.526 [2024-07-24 19:28:56.727482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.526 [2024-07-24 19:28:56.727499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.526 [2024-07-24 19:28:56.727509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.526 [2024-07-24 19:28:56.727518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.526 [2024-07-24 19:28:56.727536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.526 qpair failed and we were unable to recover it. 
00:28:10.526 [2024-07-24 19:28:56.737377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.527 [2024-07-24 19:28:56.737456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.527 [2024-07-24 19:28:56.737473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.527 [2024-07-24 19:28:56.737483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.527 [2024-07-24 19:28:56.737492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.527 [2024-07-24 19:28:56.737510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.527 qpair failed and we were unable to recover it. 00:28:10.527 [2024-07-24 19:28:56.747423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.527 [2024-07-24 19:28:56.747502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.527 [2024-07-24 19:28:56.747520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.527 [2024-07-24 19:28:56.747530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.527 [2024-07-24 19:28:56.747538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.527 [2024-07-24 19:28:56.747556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.527 qpair failed and we were unable to recover it. 00:28:10.527 [2024-07-24 19:28:56.757428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.527 [2024-07-24 19:28:56.757500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.527 [2024-07-24 19:28:56.757517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.527 [2024-07-24 19:28:56.757527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.527 [2024-07-24 19:28:56.757535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.527 [2024-07-24 19:28:56.757553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.527 qpair failed and we were unable to recover it. 
00:28:10.787 [2024-07-24 19:28:56.767489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.787 [2024-07-24 19:28:56.767565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.787 [2024-07-24 19:28:56.767582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.787 [2024-07-24 19:28:56.767592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.787 [2024-07-24 19:28:56.767601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.787 [2024-07-24 19:28:56.767619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.787 qpair failed and we were unable to recover it. 00:28:10.787 [2024-07-24 19:28:56.777532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.787 [2024-07-24 19:28:56.777613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.787 [2024-07-24 19:28:56.777630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.787 [2024-07-24 19:28:56.777640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.787 [2024-07-24 19:28:56.777648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.787 [2024-07-24 19:28:56.777666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.787 qpair failed and we were unable to recover it. 00:28:10.787 [2024-07-24 19:28:56.787540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.787 [2024-07-24 19:28:56.787631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.787 [2024-07-24 19:28:56.787647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.787 [2024-07-24 19:28:56.787657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.787 [2024-07-24 19:28:56.787666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.787 [2024-07-24 19:28:56.787683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.787 qpair failed and we were unable to recover it. 
00:28:10.787 [2024-07-24 19:28:56.797580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.787 [2024-07-24 19:28:56.797656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.787 [2024-07-24 19:28:56.797677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.787 [2024-07-24 19:28:56.797687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.787 [2024-07-24 19:28:56.797695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.787 [2024-07-24 19:28:56.797713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.787 qpair failed and we were unable to recover it. 00:28:10.787 [2024-07-24 19:28:56.807610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.787 [2024-07-24 19:28:56.807683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.787 [2024-07-24 19:28:56.807701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.787 [2024-07-24 19:28:56.807711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.787 [2024-07-24 19:28:56.807723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.787 [2024-07-24 19:28:56.807741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.787 qpair failed and we were unable to recover it. 00:28:10.787 [2024-07-24 19:28:56.817626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.787 [2024-07-24 19:28:56.817702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.787 [2024-07-24 19:28:56.817724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.787 [2024-07-24 19:28:56.817733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.788 [2024-07-24 19:28:56.817741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.788 [2024-07-24 19:28:56.817760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.788 qpair failed and we were unable to recover it. 
00:28:10.788 [2024-07-24 19:28:56.827700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.788 [2024-07-24 19:28:56.827777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.788 [2024-07-24 19:28:56.827795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.788 [2024-07-24 19:28:56.827805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.788 [2024-07-24 19:28:56.827813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.788 [2024-07-24 19:28:56.827831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.788 qpair failed and we were unable to recover it. 00:28:10.788 [2024-07-24 19:28:56.837698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.788 [2024-07-24 19:28:56.837775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.788 [2024-07-24 19:28:56.837792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.788 [2024-07-24 19:28:56.837801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.788 [2024-07-24 19:28:56.837810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.788 [2024-07-24 19:28:56.837831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.788 qpair failed and we were unable to recover it. 00:28:10.788 [2024-07-24 19:28:56.847762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.788 [2024-07-24 19:28:56.847885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.788 [2024-07-24 19:28:56.847903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.788 [2024-07-24 19:28:56.847912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.788 [2024-07-24 19:28:56.847921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.788 [2024-07-24 19:28:56.847939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.788 qpair failed and we were unable to recover it. 
00:28:10.788 [2024-07-24 19:28:56.857683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.788 [2024-07-24 19:28:56.857766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.788 [2024-07-24 19:28:56.857784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.788 [2024-07-24 19:28:56.857794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.788 [2024-07-24 19:28:56.857803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd54c000b90 00:28:10.788 [2024-07-24 19:28:56.857820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.788 qpair failed and we were unable to recover it. 00:28:10.788 [2024-07-24 19:28:56.867858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.788 [2024-07-24 19:28:56.867958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.788 [2024-07-24 19:28:56.867987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.788 [2024-07-24 19:28:56.868001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.788 [2024-07-24 19:28:56.868014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd544000b90 00:28:10.788 [2024-07-24 19:28:56.868042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.788 qpair failed and we were unable to recover it. 00:28:10.788 [2024-07-24 19:28:56.877808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.788 [2024-07-24 19:28:56.877919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.788 [2024-07-24 19:28:56.877937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.788 [2024-07-24 19:28:56.877947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.788 [2024-07-24 19:28:56.877956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd544000b90 00:28:10.788 [2024-07-24 19:28:56.877975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.788 qpair failed and we were unable to recover it. 
00:28:10.788 [2024-07-24 19:28:56.887858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.788 [2024-07-24 19:28:56.888016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.788 [2024-07-24 19:28:56.888049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.788 [2024-07-24 19:28:56.888064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.788 [2024-07-24 19:28:56.888077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd554000b90 00:28:10.788 [2024-07-24 19:28:56.888104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:10.788 qpair failed and we were unable to recover it. 00:28:10.788 [2024-07-24 19:28:56.897897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.788 [2024-07-24 19:28:56.897977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.788 [2024-07-24 19:28:56.897996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.788 [2024-07-24 19:28:56.898006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.788 [2024-07-24 19:28:56.898015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd554000b90 00:28:10.788 [2024-07-24 19:28:56.898034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:10.788 qpair failed and we were unable to recover it. 00:28:10.788 [2024-07-24 19:28:56.898130] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:28:10.788 A controller has encountered a failure and is being reset. 00:28:10.788 [2024-07-24 19:28:56.907908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.788 [2024-07-24 19:28:56.908004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.788 [2024-07-24 19:28:56.908034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.788 [2024-07-24 19:28:56.908049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.788 [2024-07-24 19:28:56.908061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce41a0 00:28:10.788 [2024-07-24 19:28:56.908087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.788 qpair failed and we were unable to recover it. 
00:28:10.788 [2024-07-24 19:28:56.917916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.788 [2024-07-24 19:28:56.917993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.788 [2024-07-24 19:28:56.918012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.788 [2024-07-24 19:28:56.918023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.788 [2024-07-24 19:28:56.918032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce41a0 00:28:10.788 [2024-07-24 19:28:56.918050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.788 qpair failed and we were unable to recover it. 00:28:10.788 [2024-07-24 19:28:56.918161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf2210 (9): Bad file descriptor 00:28:11.048 Controller properly reset. 00:28:11.048 Initializing NVMe Controllers 00:28:11.048 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:11.048 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:11.048 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:11.048 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:11.048 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:11.048 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:11.048 Initialization complete. Launching workers. 
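To poke at the same association by hand after a run like this, a minimal sketch with stock nvme-cli (assuming the target from this run is still listening on 10.0.0.2:4420; this is not part of the test script itself):

  # Issue the same fabrics CONNECT the test was looping on:
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  # Confirm the association recovered after the controller reset, then tear it down:
  nvme list-subsys
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1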
00:28:11.048 Starting thread on core 1 00:28:11.048 Starting thread on core 2 00:28:11.048 Starting thread on core 3 00:28:11.048 Starting thread on core 0 00:28:11.048 19:28:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:28:11.048 00:28:11.048 real 0m11.455s 00:28:11.048 user 0m20.686s 00:28:11.048 sys 0m4.997s 00:28:11.048 19:28:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:11.048 19:28:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:11.048 ************************************ 00:28:11.048 END TEST nvmf_target_disconnect_tc2 00:28:11.048 ************************************ 00:28:11.048 19:28:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:28:11.048 19:28:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:28:11.048 19:28:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:28:11.048 19:28:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:11.048 19:28:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:28:11.048 19:28:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:11.048 19:28:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:28:11.048 19:28:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:11.048 19:28:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:11.048 rmmod nvme_tcp 00:28:11.048 rmmod nvme_fabrics 00:28:11.048 rmmod nvme_keyring 00:28:11.048 19:28:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:11.048 19:28:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:28:11.048 19:28:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:28:11.048 19:28:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1688953 ']' 00:28:11.048 19:28:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1688953 00:28:11.048 19:28:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1688953 ']' 00:28:11.048 19:28:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 1688953 00:28:11.048 19:28:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:28:11.048 19:28:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:11.049 19:28:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1688953 00:28:11.049 19:28:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:28:11.049 19:28:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:28:11.049 19:28:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1688953' 00:28:11.049 killing process with pid 1688953 00:28:11.049 19:28:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@969 -- # kill 1688953 00:28:11.049 19:28:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 1688953 00:28:11.308 19:28:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:11.308 19:28:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:11.308 19:28:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:11.308 19:28:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:11.308 19:28:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:11.308 19:28:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.308 19:28:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:11.308 19:28:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.843 19:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:13.843 00:28:13.843 real 0m21.235s 00:28:13.843 user 0m48.823s 00:28:13.843 sys 0m10.750s 00:28:13.843 19:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:13.843 19:28:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:13.843 ************************************ 00:28:13.843 END TEST nvmf_target_disconnect 00:28:13.843 ************************************ 00:28:13.843 19:28:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:28:13.843 00:28:13.843 real 6m14.197s 00:28:13.843 user 10m57.527s 00:28:13.843 sys 2m18.374s 00:28:13.843 19:28:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:13.843 19:28:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.843 ************************************ 00:28:13.843 END TEST nvmf_host 00:28:13.843 ************************************ 00:28:13.843 00:28:13.843 real 22m19.704s 00:28:13.843 user 45m34.688s 00:28:13.843 sys 8m18.248s 00:28:13.843 19:28:59 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:13.843 19:28:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:13.843 ************************************ 00:28:13.843 END TEST nvmf_tcp 00:28:13.843 ************************************ 00:28:13.843 19:28:59 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:28:13.843 19:28:59 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:28:13.843 19:28:59 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:13.843 19:28:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:13.843 19:28:59 -- common/autotest_common.sh@10 -- # set +x 00:28:13.843 ************************************ 00:28:13.843 START TEST spdkcli_nvmf_tcp 00:28:13.843 ************************************ 00:28:13.843 19:28:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:28:13.843 * Looking for test storage... 
00:28:13.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:28:13.843 19:28:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:28:13.843 19:28:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:28:13.843 19:28:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:28:13.843 19:28:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:13.843 19:28:59 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:28:13.843 19:28:59 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:13.843 19:28:59 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:13.843 19:28:59 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:13.843 19:28:59 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:13.843 19:28:59 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:13.843 19:28:59 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:13.843 19:28:59 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:13.843 19:28:59 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:13.843 19:28:59 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:13.843 19:28:59 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:13.843 19:28:59 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:28:13.843 19:28:59 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:28:13.843 19:28:59 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:13.843 19:28:59 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:13.844 19:28:59 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:13.844 19:28:59 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:13.844 19:28:59 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:13.844 19:28:59 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:13.844 19:28:59 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:13.844 19:28:59 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:13.844 19:28:59 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.844 19:28:59 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.844 19:28:59 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.844 19:28:59 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:28:13.844 19:28:59 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.844 19:28:59 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:28:13.844 19:28:59 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:13.844 19:28:59 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:13.844 19:28:59 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:13.844 19:28:59 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:13.844 19:28:59 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:13.844 19:28:59 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:13.844 19:28:59 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:13.844 19:28:59 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:13.844 19:28:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:28:13.844 19:28:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:28:13.844 19:28:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:28:13.844 19:28:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:28:13.844 19:28:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:13.844 19:28:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:13.844 19:28:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:28:13.844 19:28:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1690678 00:28:13.844 19:28:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:28:13.844 19:28:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1690678 00:28:13.844 19:28:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 1690678 ']' 00:28:13.844 19:28:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:13.844 19:28:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:13.844 19:28:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:13.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:13.844 19:28:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:13.844 19:28:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:13.844 [2024-07-24 19:28:59.883952] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:28:13.844 [2024-07-24 19:28:59.884005] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1690678 ] 00:28:13.844 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.844 [2024-07-24 19:28:59.948836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:13.844 [2024-07-24 19:29:00.029825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:13.844 [2024-07-24 19:29:00.029828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.789 19:29:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:14.789 19:29:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:28:14.789 19:29:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:28:14.789 19:29:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:14.789 19:29:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:14.789 19:29:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:28:14.789 19:29:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:28:14.789 19:29:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:28:14.789 19:29:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:14.789 19:29:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:14.789 19:29:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:28:14.789 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:28:14.789 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:28:14.789 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:28:14.790 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:28:14.790 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:28:14.790 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:28:14.790 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:14.790 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:28:14.790 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:28:14.790 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:14.790 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:14.790 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:28:14.790 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:14.790 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:14.790 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:28:14.790 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:14.790 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:28:14.790 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:14.790 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:14.790 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:28:14.790 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:28:14.790 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:28:14.790 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:28:14.790 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:14.790 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:28:14.790 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:28:14.790 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:28:14.790 ' 00:28:17.344 [2024-07-24 19:29:03.126600] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:18.278 [2024-07-24 19:29:04.302491] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:28:20.811 [2024-07-24 19:29:06.464961] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:28:22.189 [2024-07-24 19:29:08.322662] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:28:23.565 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:28:23.565 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:28:23.565 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:28:23.565 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:28:23.565 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:28:23.565 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:28:23.565 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:28:23.565 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:28:23.565 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:28:23.565 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:28:23.565 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:23.565 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:23.565 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:28:23.565 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:23.565 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:23.565 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:28:23.565 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:23.565 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:28:23.565 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:28:23.565 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:23.565 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:28:23.565 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:28:23.565 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:28:23.565 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:28:23.565 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:23.565 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:28:23.565 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:28:23.565 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:28:23.824 19:29:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:28:23.824 19:29:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:23.824 19:29:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:23.824 19:29:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:28:23.824 19:29:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:23.824 19:29:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:23.824 19:29:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:28:23.824 19:29:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:28:24.083 19:29:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:28:24.083 19:29:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:28:24.083 19:29:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:28:24.083 19:29:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:24.083 19:29:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:24.341 19:29:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:28:24.341 19:29:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:24.341 19:29:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:24.342 19:29:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:28:24.342 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:28:24.342 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:24.342 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:28:24.342 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:28:24.342 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:28:24.342 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:28:24.342 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:24.342 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:28:24.342 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:28:24.342 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:28:24.342 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:28:24.342 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:28:24.342 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:28:24.342 ' 00:28:29.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:28:29.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:28:29.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:29.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:28:29.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:28:29.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:28:29.614 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:28:29.614 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:29.614 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:28:29.614 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:28:29.614 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:28:29.614 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:28:29.614 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:28:29.614 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:28:29.614 19:29:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:28:29.614 19:29:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:29.614 19:29:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:29.614 19:29:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1690678 00:28:29.614 19:29:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1690678 ']' 00:28:29.614 19:29:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1690678 00:28:29.614 19:29:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:28:29.614 19:29:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:29.614 19:29:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1690678 00:28:29.614 19:29:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:29.614 19:29:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:29.614 19:29:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1690678' 00:28:29.614 killing process with pid 1690678 00:28:29.614 19:29:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 1690678 00:28:29.614 19:29:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 1690678 00:28:29.614 19:29:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:28:29.614 19:29:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:28:29.614 19:29:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1690678 ']' 00:28:29.614 19:29:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1690678 00:28:29.614 19:29:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1690678 ']' 00:28:29.614 19:29:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1690678 00:28:29.614 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1690678) - No such process 00:28:29.614 19:29:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 1690678 is not found' 00:28:29.614 Process with pid 1690678 is not found 00:28:29.614 19:29:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:28:29.614 19:29:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:28:29.614 19:29:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:28:29.614 00:28:29.614 real 0m15.892s 00:28:29.614 user 0m32.820s 00:28:29.614 sys 0m0.869s 00:28:29.614 19:29:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:29.614 19:29:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:29.614 ************************************ 00:28:29.614 END TEST spdkcli_nvmf_tcp 00:28:29.614 ************************************ 00:28:29.614 19:29:15 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:28:29.614 19:29:15 -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:29.614 19:29:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:29.614 19:29:15 -- common/autotest_common.sh@10 -- # set +x 00:28:29.614 ************************************ 00:28:29.614 START TEST nvmf_identify_passthru 00:28:29.614 ************************************ 00:28:29.614 19:29:15 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:28:29.614 * Looking for test storage... 00:28:29.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:29.614 19:29:15 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:29.614 19:29:15 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:28:29.614 19:29:15 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:29.615 19:29:15 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:29.615 19:29:15 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:29.615 19:29:15 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:29.615 19:29:15 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:29.615 19:29:15 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:29.615 19:29:15 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:29.615 19:29:15 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:29.615 19:29:15 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:29.615 19:29:15 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:29.615 19:29:15 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:28:29.615 19:29:15 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:28:29.615 19:29:15 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:29.615 19:29:15 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:29.615 19:29:15 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:29.615 19:29:15 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:29.615 19:29:15 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:29.615 19:29:15 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:29.615 19:29:15 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:29.615 19:29:15 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:29.615 19:29:15 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.615 19:29:15 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.615 19:29:15 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.615 19:29:15 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:28:29.615 19:29:15 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.615 19:29:15 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:28:29.615 19:29:15 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:29.615 19:29:15 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:29.615 19:29:15 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:29.615 19:29:15 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:29.615 19:29:15 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:29.615 19:29:15 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:29.615 19:29:15 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:29.615 19:29:15 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:29.615 19:29:15 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:29.615 19:29:15 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:29.615 19:29:15 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:29.615 19:29:15 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:29.615 19:29:15 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.615 19:29:15 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.615 19:29:15 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.615 19:29:15 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:28:29.615 19:29:15 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.615 19:29:15 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:28:29.615 19:29:15 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:29.615 19:29:15 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:29.615 19:29:15 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:29.615 19:29:15 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:29.615 19:29:15 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:29.615 19:29:15 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:29.615 19:29:15 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:29.615 19:29:15 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:29.615 19:29:15 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:29.615 19:29:15 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:29.615 19:29:15 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:28:29.615 19:29:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:36.180 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:36.180 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:36.180 19:29:22 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:36.180 Found net devices under 0000:af:00.0: cvl_0_0 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:36.180 Found net devices under 0000:af:00.1: cvl_0_1 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:36.180 19:29:22 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:36.180 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:36.438 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:36.438 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:36.438 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:36.438 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:36.438 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:36.438 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:36.438 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:36.438 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:36.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:28:36.438 00:28:36.438 --- 10.0.0.2 ping statistics --- 00:28:36.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.438 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:28:36.438 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:36.438 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:36.438 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:28:36.438 00:28:36.438 --- 10.0.0.1 ping statistics --- 00:28:36.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.438 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:28:36.438 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:36.438 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:28:36.438 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:36.438 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:36.438 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:36.438 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:36.438 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:36.438 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:36.438 19:29:22 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:36.438 19:29:22 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:28:36.438 19:29:22 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:36.438 19:29:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:36.438 19:29:22 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:28:36.438 19:29:22 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:28:36.438 19:29:22 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:28:36.438 19:29:22 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:28:36.438 19:29:22 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:28:36.438 19:29:22 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:28:36.438 19:29:22 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:28:36.438 19:29:22 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:36.438 19:29:22 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:36.438 19:29:22 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:28:36.695 19:29:22 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:28:36.695 19:29:22 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:28:36.695 19:29:22 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:d8:00.0 00:28:36.695 19:29:22 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:d8:00.0 00:28:36.695 19:29:22 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:d8:00.0 ']' 00:28:36.695 19:29:22 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:28:36.695 19:29:22 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:28:36.695 19:29:22 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:28:36.695 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.974 
19:29:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLN916500W71P6AGN 00:28:41.974 19:29:27 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:28:41.974 19:29:27 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:28:41.974 19:29:27 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:28:41.974 EAL: No free 2048 kB hugepages reported on node 1 00:28:46.227 19:29:32 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:28:46.227 19:29:32 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:28:46.227 19:29:32 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:46.227 19:29:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:46.227 19:29:32 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:28:46.227 19:29:32 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:46.227 19:29:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:46.227 19:29:32 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1698108 00:28:46.227 19:29:32 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:46.227 19:29:32 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:46.227 19:29:32 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1698108 00:28:46.227 19:29:32 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 1698108 ']' 00:28:46.227 19:29:32 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:46.227 19:29:32 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:46.227 19:29:32 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:46.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:46.227 19:29:32 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:46.227 19:29:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:46.227 [2024-07-24 19:29:32.299260] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:28:46.227 [2024-07-24 19:29:32.299310] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:46.227 EAL: No free 2048 kB hugepages reported on node 1 00:28:46.227 [2024-07-24 19:29:32.371804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:46.227 [2024-07-24 19:29:32.444484] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:46.227 [2024-07-24 19:29:32.444524] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
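The serial number (BTLN916500W71P6AGN) and model string (INTEL) scraped above from the PCIe-attached controller are the reference values for the passthrough check: once the subsystem is exported over NVMe/TCP, the same spdk_nvme_identify tool is pointed at the fabrics listener and both values must match exactly. A minimal sketch of that capture step, assuming an SPDK source tree as the working directory; the head -n1 stands in for the harness's get_first_nvme_bdf helper:

    # Sketch: find the first local NVMe BDF and record its identity.
    # get_first_nvme_bdf in autotest_common.sh does this with a bash array.
    bdf=$(scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)
    identify() { build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0; }
    nvme_serial_number=$(identify | grep 'Serial Number:' | awk '{print $3}')
    nvme_model_number=$(identify | grep 'Model Number:' | awk '{print $3}')

Note that awk '{print $3}' keeps only the first word of the model string, which is why the recorded value is INTEL rather than the full product name.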
00:28:46.227 [2024-07-24 19:29:32.444533] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:46.227 [2024-07-24 19:29:32.444541] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:46.227 [2024-07-24 19:29:32.444548] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:46.227 [2024-07-24 19:29:32.444589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:46.227 [2024-07-24 19:29:32.444686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:46.227 [2024-07-24 19:29:32.444784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:46.227 [2024-07-24 19:29:32.444786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.164 19:29:33 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:47.164 19:29:33 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:28:47.164 19:29:33 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:28:47.164 19:29:33 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.164 19:29:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:47.164 INFO: Log level set to 20 00:28:47.164 INFO: Requests: 00:28:47.164 { 00:28:47.164 "jsonrpc": "2.0", 00:28:47.164 "method": "nvmf_set_config", 00:28:47.164 "id": 1, 00:28:47.164 "params": { 00:28:47.164 "admin_cmd_passthru": { 00:28:47.164 "identify_ctrlr": true 00:28:47.164 } 00:28:47.164 } 00:28:47.164 } 00:28:47.164 00:28:47.164 INFO: response: 00:28:47.164 { 00:28:47.164 "jsonrpc": "2.0", 00:28:47.164 "id": 1, 00:28:47.164 "result": true 00:28:47.164 } 00:28:47.164 00:28:47.164 19:29:33 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.164 19:29:33 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:28:47.164 19:29:33 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.164 19:29:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:47.164 INFO: Setting log level to 20 00:28:47.164 INFO: Setting log level to 20 00:28:47.164 INFO: Log level set to 20 00:28:47.164 INFO: Log level set to 20 00:28:47.164 INFO: Requests: 00:28:47.164 { 00:28:47.164 "jsonrpc": "2.0", 00:28:47.164 "method": "framework_start_init", 00:28:47.164 "id": 1 00:28:47.164 } 00:28:47.164 00:28:47.164 INFO: Requests: 00:28:47.164 { 00:28:47.164 "jsonrpc": "2.0", 00:28:47.164 "method": "framework_start_init", 00:28:47.164 "id": 1 00:28:47.164 } 00:28:47.164 00:28:47.164 [2024-07-24 19:29:33.199208] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:28:47.164 INFO: response: 00:28:47.164 { 00:28:47.164 "jsonrpc": "2.0", 00:28:47.164 "id": 1, 00:28:47.164 "result": true 00:28:47.164 } 00:28:47.164 00:28:47.164 INFO: response: 00:28:47.164 { 00:28:47.164 "jsonrpc": "2.0", 00:28:47.164 "id": 1, 00:28:47.164 "result": true 00:28:47.164 } 00:28:47.164 00:28:47.164 19:29:33 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.164 19:29:33 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:47.164 19:29:33 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.164 19:29:33 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:28:47.164 INFO: Setting log level to 40 00:28:47.164 INFO: Setting log level to 40 00:28:47.164 INFO: Setting log level to 40 00:28:47.164 [2024-07-24 19:29:33.212615] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:47.164 19:29:33 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.164 19:29:33 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:28:47.164 19:29:33 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:47.164 19:29:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:47.164 19:29:33 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 00:28:47.164 19:29:33 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.164 19:29:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:50.455 Nvme0n1 00:28:50.455 19:29:36 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.455 19:29:36 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:28:50.455 19:29:36 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.455 19:29:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:50.455 19:29:36 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.455 19:29:36 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:50.455 19:29:36 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.455 19:29:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:50.455 19:29:36 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.455 19:29:36 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:50.455 19:29:36 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.455 19:29:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:50.455 [2024-07-24 19:29:36.145604] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:50.455 19:29:36 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.455 19:29:36 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:28:50.455 19:29:36 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.455 19:29:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:50.456 [ 00:28:50.456 { 00:28:50.456 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:50.456 "subtype": "Discovery", 00:28:50.456 "listen_addresses": [], 00:28:50.456 "allow_any_host": true, 00:28:50.456 "hosts": [] 00:28:50.456 }, 00:28:50.456 { 00:28:50.456 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:50.456 "subtype": "NVMe", 00:28:50.456 "listen_addresses": [ 00:28:50.456 { 00:28:50.456 "trtype": "TCP", 00:28:50.456 "adrfam": "IPv4", 00:28:50.456 "traddr": "10.0.0.2", 00:28:50.456 "trsvcid": "4420" 00:28:50.456 } 00:28:50.456 ], 00:28:50.456 "allow_any_host": true, 00:28:50.456 "hosts": [], 00:28:50.456 "serial_number": 
"SPDK00000000000001", 00:28:50.456 "model_number": "SPDK bdev Controller", 00:28:50.456 "max_namespaces": 1, 00:28:50.456 "min_cntlid": 1, 00:28:50.456 "max_cntlid": 65519, 00:28:50.456 "namespaces": [ 00:28:50.456 { 00:28:50.456 "nsid": 1, 00:28:50.456 "bdev_name": "Nvme0n1", 00:28:50.456 "name": "Nvme0n1", 00:28:50.456 "nguid": "6A1DA8B9A1004B0E92DFC5DB17E14B86", 00:28:50.456 "uuid": "6a1da8b9-a100-4b0e-92df-c5db17e14b86" 00:28:50.456 } 00:28:50.456 ] 00:28:50.456 } 00:28:50.456 ] 00:28:50.456 19:29:36 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.456 19:29:36 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:50.456 19:29:36 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:28:50.456 19:29:36 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:28:50.456 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.456 19:29:36 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLN916500W71P6AGN 00:28:50.456 19:29:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:50.456 19:29:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:28:50.456 19:29:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:28:50.456 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.456 19:29:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:28:50.456 19:29:36 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLN916500W71P6AGN '!=' BTLN916500W71P6AGN ']' 00:28:50.456 19:29:36 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:28:50.456 19:29:36 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:50.456 19:29:36 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.456 19:29:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:50.456 19:29:36 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.456 19:29:36 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:28:50.456 19:29:36 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:28:50.456 19:29:36 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:50.456 19:29:36 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:28:50.456 19:29:36 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:50.456 19:29:36 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:28:50.456 19:29:36 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:50.456 19:29:36 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:50.456 rmmod nvme_tcp 00:28:50.456 rmmod nvme_fabrics 00:28:50.456 rmmod nvme_keyring 00:28:50.456 19:29:36 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:50.456 19:29:36 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:28:50.456 19:29:36 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:28:50.456 19:29:36 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1698108 ']' 00:28:50.456 19:29:36 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1698108 00:28:50.456 19:29:36 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 1698108 ']' 00:28:50.456 19:29:36 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 1698108 00:28:50.456 19:29:36 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:28:50.456 19:29:36 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:50.456 19:29:36 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1698108 00:28:50.456 19:29:36 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:50.456 19:29:36 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:50.456 19:29:36 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1698108' 00:28:50.456 killing process with pid 1698108 00:28:50.456 19:29:36 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 1698108 00:28:50.456 19:29:36 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 1698108 00:28:52.990 19:29:38 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:52.990 19:29:38 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:52.990 19:29:38 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:52.990 19:29:38 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:52.990 19:29:38 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:52.990 19:29:38 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:52.990 19:29:38 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:52.990 19:29:38 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:54.896 19:29:40 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:54.896 00:28:54.896 real 0m25.047s 00:28:54.896 user 0m33.182s 00:28:54.896 sys 0m6.603s 00:28:54.896 19:29:40 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:54.896 19:29:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:54.896 ************************************ 00:28:54.896 END TEST nvmf_identify_passthru 00:28:54.896 ************************************ 00:28:54.896 19:29:40 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:28:54.896 19:29:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:54.896 19:29:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:54.896 19:29:40 -- common/autotest_common.sh@10 -- # set +x 00:28:54.896 ************************************ 00:28:54.896 START TEST nvmf_dif 00:28:54.896 ************************************ 00:28:54.896 19:29:40 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:28:54.896 * Looking for test storage... 
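Every suite in this job funnels through the same run_test wrapper, which prints the asterisk banners, times the body (the real/user/sys block above), and propagates the exit status so the pipeline can fail the build. A rough bash equivalent, inferred only from the output visible in this log; the real helper lives in SPDK's autotest_common.sh and also manages xtrace state:

    # Hypothetical stand-in for run_test as its output suggests; not the
    # actual SPDK implementation.
    run_test() {
        local name=$1; shift
        printf '%s\n' '************************************' \
                      "START TEST $name" \
                      '************************************'
        time "$@"
        local rc=$?
        printf '%s\n' '************************************' \
                      "END TEST $name" \
                      '************************************'
        return $rc
    }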
00:28:54.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:54.896 19:29:40 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:54.896 19:29:40 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:28:54.896 19:29:40 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:54.896 19:29:40 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:54.896 19:29:40 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:54.896 19:29:40 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:54.896 19:29:40 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:54.896 19:29:40 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:54.896 19:29:40 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:54.896 19:29:40 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:54.896 19:29:40 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:54.896 19:29:40 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:54.896 19:29:40 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:28:54.896 19:29:40 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:28:54.896 19:29:40 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:54.896 19:29:40 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:54.896 19:29:40 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:54.896 19:29:40 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:54.896 19:29:40 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:54.896 19:29:40 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:54.896 19:29:40 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:54.896 19:29:40 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:54.896 19:29:40 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.896 19:29:40 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.896 19:29:40 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.896 19:29:40 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:28:54.896 19:29:40 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.896 19:29:40 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:28:54.896 19:29:40 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:54.896 19:29:40 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:54.896 19:29:40 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:54.896 19:29:40 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:54.896 19:29:40 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:54.896 19:29:40 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:54.896 19:29:40 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:54.896 19:29:40 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:54.896 19:29:40 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:28:54.896 19:29:40 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:28:54.896 19:29:40 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:28:54.896 19:29:40 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:28:54.896 19:29:40 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:28:54.896 19:29:40 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:54.896 19:29:40 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:54.896 19:29:40 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:54.896 19:29:40 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:54.896 19:29:40 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:54.896 19:29:40 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:54.896 19:29:40 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:54.896 19:29:40 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:54.896 19:29:40 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:54.896 19:29:40 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:54.896 19:29:40 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:28:54.896 19:29:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:01.471 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:01.471 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:01.471 Found net devices under 0000:af:00.0: cvl_0_0 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:01.471 Found net devices under 0000:af:00.1: cvl_0_1 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:01.471 19:29:47 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:01.731 19:29:47 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:01.731 19:29:47 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:01.731 19:29:47 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:01.731 19:29:47 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:01.731 19:29:47 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:01.731 19:29:47 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:01.731 19:29:47 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:01.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:01.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:29:01.731 00:29:01.731 --- 10.0.0.2 ping statistics --- 00:29:01.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:01.731 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:29:01.731 19:29:47 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:01.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:01.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:29:01.731 00:29:01.731 --- 10.0.0.1 ping statistics --- 00:29:01.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:01.731 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:29:01.731 19:29:47 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:01.731 19:29:47 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:29:01.731 19:29:47 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:01.731 19:29:47 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:05.028 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:29:05.028 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:29:05.028 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:29:05.028 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:29:05.028 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:29:05.028 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:29:05.028 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:29:05.028 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:29:05.028 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:29:05.028 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:29:05.028 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:29:05.028 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:29:05.028 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:29:05.028 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:29:05.028 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:29:05.028 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:29:05.028 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:29:05.287 19:29:51 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:05.287 19:29:51 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:05.287 19:29:51 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:05.287 19:29:51 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:05.287 19:29:51 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:05.288 19:29:51 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:05.288 19:29:51 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:29:05.288 19:29:51 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:29:05.288 19:29:51 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:05.288 19:29:51 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:05.288 19:29:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:05.288 19:29:51 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1704145 00:29:05.288 19:29:51 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:29:05.288 19:29:51 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1704145 00:29:05.288 19:29:51 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 1704145 ']' 00:29:05.288 19:29:51 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:05.288 19:29:51 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:05.288 19:29:51 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:05.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:05.288 19:29:51 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:05.288 19:29:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:05.288 [2024-07-24 19:29:51.463004] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:29:05.288 [2024-07-24 19:29:51.463049] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:05.288 EAL: No free 2048 kB hugepages reported on node 1 00:29:05.547 [2024-07-24 19:29:51.534673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.547 [2024-07-24 19:29:51.606409] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:05.547 [2024-07-24 19:29:51.606449] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:05.547 [2024-07-24 19:29:51.606458] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:05.547 [2024-07-24 19:29:51.606467] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:05.547 [2024-07-24 19:29:51.606474] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
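As in the identify_passthru run, the target sits inside the cvl_0_0_ns_spdk namespace built a few records earlier, so initiator traffic from the default namespace has to cross the physical E810 link instead of loopback. The wiring plus launch reduces to the sketch below; interface names, addresses, and flags are the ones from this run, and the polling loop is only an approximation of the harness's waitforlisten helper:

    # Target port in a namespace at 10.0.0.2, initiator port in the default
    # namespace at 10.0.0.1, TCP/4420 opened for the NVMe-oF listener.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Launch nvmf_tgt in the namespace and wait for its RPC socket
    # (stand-in for waitforlisten; the socket defaults to /var/tmp/spdk.sock).
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

The two cross-namespace pings earlier in the log are the sanity check that this topology actually carries traffic before any NVMe-oF state is created on top of it.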
00:29:05.547 [2024-07-24 19:29:51.606497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:06.115 19:29:52 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:06.115 19:29:52 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:29:06.115 19:29:52 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:06.115 19:29:52 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:06.115 19:29:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:06.115 19:29:52 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:06.115 19:29:52 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:29:06.115 19:29:52 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:29:06.115 19:29:52 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.115 19:29:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:06.115 [2024-07-24 19:29:52.297384] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:06.115 19:29:52 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.115 19:29:52 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:29:06.115 19:29:52 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:06.115 19:29:52 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:06.115 19:29:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:06.115 ************************************ 00:29:06.115 START TEST fio_dif_1_default 00:29:06.115 ************************************ 00:29:06.115 19:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:29:06.115 19:29:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:29:06.115 19:29:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:29:06.115 19:29:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:29:06.115 19:29:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:29:06.115 19:29:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:29:06.115 19:29:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:06.115 19:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.115 19:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:06.115 bdev_null0 00:29:06.115 19:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.115 19:29:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:06.115 19:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.116 19:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:06.116 19:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.116 19:29:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:06.116 19:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.116 19:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:06.375 19:29:52 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.375 19:29:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:06.375 19:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.375 19:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:06.375 [2024-07-24 19:29:52.365691] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:06.375 19:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.375 19:29:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:29:06.375 19:29:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:06.375 19:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:06.375 19:29:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:29:06.375 19:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:06.375 19:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:06.375 19:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:06.375 19:29:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:06.375 19:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:06.375 19:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:29:06.375 19:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:06.375 19:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:06.375 19:29:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:29:06.375 19:29:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:29:06.375 19:29:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:29:06.375 19:29:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:29:06.375 19:29:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:06.375 19:29:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:29:06.375 19:29:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:06.375 { 00:29:06.375 "params": { 00:29:06.375 "name": "Nvme$subsystem", 00:29:06.375 "trtype": "$TEST_TRANSPORT", 00:29:06.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.375 "adrfam": "ipv4", 00:29:06.375 "trsvcid": "$NVMF_PORT", 00:29:06.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.375 "hdgst": ${hdgst:-false}, 00:29:06.375 "ddgst": ${ddgst:-false} 00:29:06.375 }, 00:29:06.375 "method": "bdev_nvme_attach_controller" 00:29:06.375 } 00:29:06.375 EOF 00:29:06.375 )") 00:29:06.375 19:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:06.375 19:29:52 nvmf_dif.fio_dif_1_default -- 
nvmf/common.sh@554 -- # cat 00:29:06.375 19:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:29:06.375 19:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:06.375 19:29:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:29:06.375 19:29:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:29:06.375 19:29:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:29:06.375 19:29:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:29:06.375 19:29:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:06.375 "params": { 00:29:06.375 "name": "Nvme0", 00:29:06.375 "trtype": "tcp", 00:29:06.375 "traddr": "10.0.0.2", 00:29:06.375 "adrfam": "ipv4", 00:29:06.375 "trsvcid": "4420", 00:29:06.375 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:06.375 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:06.375 "hdgst": false, 00:29:06.375 "ddgst": false 00:29:06.375 }, 00:29:06.375 "method": "bdev_nvme_attach_controller" 00:29:06.375 }' 00:29:06.375 19:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:06.375 19:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:06.375 19:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:06.375 19:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:06.376 19:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:06.376 19:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:06.376 19:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:06.376 19:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:06.376 19:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:06.376 19:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:06.635 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:06.635 fio-3.35 00:29:06.635 Starting 1 thread 00:29:06.635 EAL: No free 2048 kB hugepages reported on node 1 00:29:18.842 00:29:18.842 filename0: (groupid=0, jobs=1): err= 0: pid=1704571: Wed Jul 24 19:30:03 2024 00:29:18.842 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10002msec) 00:29:18.842 slat (nsec): min=5571, max=30041, avg=5898.03, stdev=1143.89 00:29:18.842 clat (usec): min=724, max=44143, avg=21039.17, stdev=20180.88 00:29:18.842 lat (usec): min=729, max=44168, avg=21045.07, stdev=20180.84 00:29:18.842 clat percentiles (usec): 00:29:18.842 | 1.00th=[ 791], 5.00th=[ 799], 10.00th=[ 799], 20.00th=[ 807], 00:29:18.842 | 30.00th=[ 816], 40.00th=[ 824], 50.00th=[41157], 60.00th=[41157], 00:29:18.842 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:29:18.842 | 99.00th=[41157], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:29:18.842 | 99.99th=[44303] 00:29:18.842 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=761.26, stdev=20.18, samples=19 00:29:18.842 iops : min= 176, max= 192, avg=190.32, stdev= 5.04, samples=19 
00:29:18.842 lat (usec) : 750=0.21%, 1000=49.68% 00:29:18.842 lat (msec) : 50=50.11% 00:29:18.842 cpu : usr=86.11%, sys=13.65%, ctx=14, majf=0, minf=225 00:29:18.842 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:18.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.842 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:18.842 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:18.842 00:29:18.842 Run status group 0 (all jobs): 00:29:18.842 READ: bw=760KiB/s (778kB/s), 760KiB/s-760KiB/s (778kB/s-778kB/s), io=7600KiB (7782kB), run=10002-10002msec 00:29:18.842 19:30:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:29:18.842 19:30:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:29:18.842 19:30:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:29:18.842 19:30:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:18.842 19:30:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:29:18.842 19:30:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:18.842 19:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.842 19:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:18.842 19:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.842 19:30:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:18.842 19:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.842 19:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:18.842 19:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.842 00:29:18.842 real 0m11.047s 00:29:18.842 user 0m16.820s 00:29:18.842 sys 0m1.688s 00:29:18.842 19:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:18.843 ************************************ 00:29:18.843 END TEST fio_dif_1_default 00:29:18.843 ************************************ 00:29:18.843 19:30:03 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:29:18.843 19:30:03 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:18.843 19:30:03 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:18.843 19:30:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:18.843 ************************************ 00:29:18.843 START TEST fio_dif_1_multi_subsystems 00:29:18.843 ************************************ 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@31 -- # create_subsystem 0 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:18.843 bdev_null0 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:18.843 [2024-07-24 19:30:03.500111] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:18.843 bdev_null1 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
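For reference, each create_subsystem iteration traced above reduces to four SPDK RPCs. A standalone sketch of the same sequence using scripts/rpc.py (the rpc_cmd wrapper in the trace resolves to the same script; this assumes the nvmf target app is already running, and reuses the 10.0.0.2:4420 listener from this run):

# Sketch: replay one create_subsystem iteration from target/dif.sh by hand.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sub=0
# 64 MB null bdev with 512-byte blocks, 16 bytes of metadata, DIF type 1
$RPC bdev_null_create bdev_null$sub 64 512 --md-size 16 --dif-type 1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$sub \
    --serial-number 53313233-$sub --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$sub bdev_null$sub
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$sub \
    -t tcp -a 10.0.0.2 -s 4420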
00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:18.843 { 00:29:18.843 "params": { 00:29:18.843 "name": "Nvme$subsystem", 00:29:18.843 "trtype": "$TEST_TRANSPORT", 00:29:18.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:18.843 "adrfam": "ipv4", 00:29:18.843 "trsvcid": "$NVMF_PORT", 00:29:18.843 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:18.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:18.843 "hdgst": ${hdgst:-false}, 00:29:18.843 "ddgst": ${ddgst:-false} 00:29:18.843 }, 00:29:18.843 "method": "bdev_nvme_attach_controller" 00:29:18.843 } 00:29:18.843 EOF 00:29:18.843 )") 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:18.843 { 00:29:18.843 "params": { 00:29:18.843 "name": "Nvme$subsystem", 00:29:18.843 "trtype": "$TEST_TRANSPORT", 00:29:18.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:18.843 "adrfam": "ipv4", 00:29:18.843 "trsvcid": "$NVMF_PORT", 00:29:18.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:18.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:18.843 "hdgst": ${hdgst:-false}, 00:29:18.843 "ddgst": ${ddgst:-false} 00:29:18.843 }, 00:29:18.843 "method": "bdev_nvme_attach_controller" 00:29:18.843 } 00:29:18.843 EOF 00:29:18.843 )") 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:29:18.843 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:18.843 "params": { 00:29:18.843 "name": "Nvme0", 00:29:18.843 "trtype": "tcp", 00:29:18.843 "traddr": "10.0.0.2", 00:29:18.843 "adrfam": "ipv4", 00:29:18.843 "trsvcid": "4420", 00:29:18.844 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:18.844 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:18.844 "hdgst": false, 00:29:18.844 "ddgst": false 00:29:18.844 }, 00:29:18.844 "method": "bdev_nvme_attach_controller" 00:29:18.844 },{ 00:29:18.844 "params": { 00:29:18.844 "name": "Nvme1", 00:29:18.844 "trtype": "tcp", 00:29:18.844 "traddr": "10.0.0.2", 00:29:18.844 "adrfam": "ipv4", 00:29:18.844 "trsvcid": "4420", 00:29:18.844 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:18.844 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:18.844 "hdgst": false, 00:29:18.844 "ddgst": false 00:29:18.844 }, 00:29:18.844 "method": "bdev_nvme_attach_controller" 00:29:18.844 }' 00:29:18.844 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:18.844 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:18.844 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:18.844 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:18.844 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:18.844 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:18.844 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:18.844 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:18.844 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:18.844 19:30:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:18.844 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:18.844 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:18.844 fio-3.35 00:29:18.844 Starting 2 threads 00:29:18.844 EAL: No free 2048 kB hugepages reported on node 1 00:29:28.828 00:29:28.828 filename0: (groupid=0, jobs=1): err= 0: pid=1706917: Wed Jul 24 19:30:14 2024 00:29:28.828 read: IOPS=96, BW=386KiB/s (395kB/s)(3872KiB/10032msec) 00:29:28.828 slat (nsec): min=5654, max=74310, avg=8747.10, stdev=4865.50 00:29:28.828 clat (usec): min=40799, max=42987, avg=41427.17, stdev=503.84 00:29:28.828 lat (usec): min=40804, max=43008, avg=41435.92, stdev=505.42 00:29:28.828 clat percentiles (usec): 00:29:28.828 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:29:28.828 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:29:28.828 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:29:28.828 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:29:28.828 | 99.99th=[42730] 
00:29:28.828 bw ( KiB/s): min= 384, max= 416, per=49.88%, avg=385.60, stdev= 7.16, samples=20 00:29:28.828 iops : min= 96, max= 104, avg=96.40, stdev= 1.79, samples=20 00:29:28.828 lat (msec) : 50=100.00% 00:29:28.828 cpu : usr=93.67%, sys=6.09%, ctx=14, majf=0, minf=197 00:29:28.828 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:28.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:28.828 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:28.828 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:28.828 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:28.828 filename1: (groupid=0, jobs=1): err= 0: pid=1706918: Wed Jul 24 19:30:14 2024 00:29:28.828 read: IOPS=96, BW=386KiB/s (395kB/s)(3872KiB/10031msec) 00:29:28.828 slat (nsec): min=5671, max=50891, avg=8942.65, stdev=5133.94 00:29:28.828 clat (usec): min=40798, max=42996, avg=41422.45, stdev=510.12 00:29:28.828 lat (usec): min=40804, max=43007, avg=41431.39, stdev=511.60 00:29:28.828 clat percentiles (usec): 00:29:28.828 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:29:28.828 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:29:28.828 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:29:28.828 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:29:28.828 | 99.99th=[43254] 00:29:28.828 bw ( KiB/s): min= 384, max= 416, per=49.88%, avg=385.60, stdev= 7.16, samples=20 00:29:28.828 iops : min= 96, max= 104, avg=96.40, stdev= 1.79, samples=20 00:29:28.828 lat (msec) : 50=100.00% 00:29:28.828 cpu : usr=94.58%, sys=5.18%, ctx=16, majf=0, minf=41 00:29:28.828 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:28.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:28.828 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:28.828 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:28.828 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:28.828 00:29:28.828 Run status group 0 (all jobs): 00:29:28.828 READ: bw=772KiB/s (790kB/s), 386KiB/s-386KiB/s (395kB/s-395kB/s), io=7744KiB (7930kB), run=10031-10032msec 00:29:28.828 19:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:29:28.828 19:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:29:28.828 19:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:29:28.828 19:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:28.828 19:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:29:28.828 19:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:28.828 19:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.828 19:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:28.828 19:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.828 19:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:28.828 19:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.828 19:30:14 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:28.828 19:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.828 19:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:29:28.828 19:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:28.829 19:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:29:28.829 19:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:28.829 19:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.829 19:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:28.829 19:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.829 19:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:28.829 19:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.829 19:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:28.829 19:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.829 00:29:28.829 real 0m11.485s 00:29:28.829 user 0m27.909s 00:29:28.829 sys 0m1.571s 00:29:28.829 19:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:28.829 19:30:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:28.829 ************************************ 00:29:28.829 END TEST fio_dif_1_multi_subsystems 00:29:28.829 ************************************ 00:29:28.829 19:30:14 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:29:28.829 19:30:14 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:28.829 19:30:14 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:28.829 19:30:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:28.829 ************************************ 00:29:28.829 START TEST fio_dif_rand_params 00:29:28.829 ************************************ 00:29:28.829 19:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:29:28.829 19:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:29:28.829 19:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:29:28.829 19:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:29:28.829 19:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:29:28.829 19:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:29:28.829 19:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:29:28.829 19:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:29:28.829 19:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:29:28.829 19:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:28.829 19:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:28.829 19:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:28.829 19:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:29:28.829 19:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:28.829 19:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.829 19:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:28.829 bdev_null0 00:29:28.829 19:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.829 19:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:28.829 19:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.829 19:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:28.829 19:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.829 19:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:28.829 19:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.829 19:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:29.089 [2024-07-24 19:30:15.074803] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 
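The NULL_DIF=3/bs=128k/numjobs=3/iodepth=3/runtime=5 parameters set at the top of this test feed gen_fio_conf, which writes the jobfile fio reads on /dev/fd/61. A plausible reconstruction of that jobfile as a heredoc (key names are assumptions; only the values echoed in the fio banner below are confirmed by this run):

# Hypothetical jobfile matching the banner below: randread, 128 KiB blocks,
# iodepth 3, 3 jobs, 5 s time-based run against the attached Nvme0n1 bdev.
# thread=1 is mandatory for the spdk_bdev ioengine.
cat <<'FIO'
[global]
thread=1
ioengine=spdk_bdev
direct=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1

[filename0]
filename=Nvme0n1
FIO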
00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:29.089 { 00:29:29.089 "params": { 00:29:29.089 "name": "Nvme$subsystem", 00:29:29.089 "trtype": "$TEST_TRANSPORT", 00:29:29.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.089 "adrfam": "ipv4", 00:29:29.089 "trsvcid": "$NVMF_PORT", 00:29:29.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.089 "hdgst": ${hdgst:-false}, 00:29:29.089 "ddgst": ${ddgst:-false} 00:29:29.089 }, 00:29:29.089 "method": "bdev_nvme_attach_controller" 00:29:29.089 } 00:29:29.089 EOF 00:29:29.089 )") 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
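The "method": "bdev_nvme_attach_controller" fragment assembled above is the config-file form of a host-side RPC. Against a long-running SPDK app the same attach could be issued directly (a sketch, not taken from this script):

# Attach the remote namespace as local bdev Nvme0n1 (controller name Nvme0).
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
    -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0

Here fio skips that path: the spdk_bdev ioengine boots its own SPDK instance from the JSON printed below, so no separate host-side RPC socket is involved.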
00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:29.089 "params": { 00:29:29.089 "name": "Nvme0", 00:29:29.089 "trtype": "tcp", 00:29:29.089 "traddr": "10.0.0.2", 00:29:29.089 "adrfam": "ipv4", 00:29:29.089 "trsvcid": "4420", 00:29:29.089 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:29.089 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:29.089 "hdgst": false, 00:29:29.089 "ddgst": false 00:29:29.089 }, 00:29:29.089 "method": "bdev_nvme_attach_controller" 00:29:29.089 }' 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:29.089 19:30:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:29.349 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:29.349 ... 
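The asan_lib probing traced above guards against a sanitizer-instrumented plugin: ASan requires its runtime to be loaded before anything it instruments, so if the fio plugin links libasan (or clang's libclang_rt.asan), that runtime must come first in LD_PRELOAD. A condensed sketch of the logic from common/autotest_common.sh (plugin path from this run; bdev.json and job.fio stand in for the /dev/fd/62 and /dev/fd/61 substitutions used here):

# Probe the plugin for a linked sanitizer runtime and preload it first.
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    if [[ -n $asan_lib ]]; then
        asan_lib+=" "
        break
    fi
done
# In this run both greps came back empty, so only the plugin is preloaded.
LD_PRELOAD="$asan_lib$plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio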
00:29:29.349 fio-3.35 00:29:29.349 Starting 3 threads 00:29:29.349 EAL: No free 2048 kB hugepages reported on node 1 00:29:35.919 00:29:35.919 filename0: (groupid=0, jobs=1): err= 0: pid=1709098: Wed Jul 24 19:30:21 2024 00:29:35.919 read: IOPS=244, BW=30.5MiB/s (32.0MB/s)(153MiB/5025msec) 00:29:35.919 slat (nsec): min=5878, max=54977, avg=8915.70, stdev=2853.72 00:29:35.919 clat (usec): min=3766, max=91559, avg=12271.92, stdev=14462.37 00:29:35.919 lat (usec): min=3774, max=91571, avg=12280.83, stdev=14462.62 00:29:35.919 clat percentiles (usec): 00:29:35.919 | 1.00th=[ 4113], 5.00th=[ 4490], 10.00th=[ 5014], 20.00th=[ 5604], 00:29:35.919 | 30.00th=[ 6259], 40.00th=[ 6718], 50.00th=[ 7111], 60.00th=[ 7635], 00:29:35.919 | 70.00th=[ 8356], 80.00th=[ 9503], 90.00th=[48497], 95.00th=[49546], 00:29:35.919 | 99.00th=[51643], 99.50th=[53216], 99.90th=[91751], 99.95th=[91751], 00:29:35.919 | 99.99th=[91751] 00:29:35.919 bw ( KiB/s): min=19968, max=49664, per=31.79%, avg=31334.40, stdev=8954.07, samples=10 00:29:35.919 iops : min= 156, max= 388, avg=244.80, stdev=69.95, samples=10 00:29:35.919 lat (msec) : 4=0.81%, 10=83.62%, 20=3.34%, 50=7.82%, 100=4.40% 00:29:35.919 cpu : usr=92.99%, sys=6.67%, ctx=8, majf=0, minf=122 00:29:35.919 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:35.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.919 issued rwts: total=1227,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:35.919 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:35.919 filename0: (groupid=0, jobs=1): err= 0: pid=1709099: Wed Jul 24 19:30:21 2024 00:29:35.919 read: IOPS=273, BW=34.2MiB/s (35.8MB/s)(171MiB/5003msec) 00:29:35.919 slat (nsec): min=5904, max=31918, avg=9209.83, stdev=2834.04 00:29:35.920 clat (usec): min=3615, max=53195, avg=10965.66, stdev=12715.67 00:29:35.920 lat (usec): min=3622, max=53209, avg=10974.87, stdev=12716.05 00:29:35.920 clat percentiles (usec): 00:29:35.920 | 1.00th=[ 4146], 5.00th=[ 4424], 10.00th=[ 4817], 20.00th=[ 5276], 00:29:35.920 | 30.00th=[ 5866], 40.00th=[ 6456], 50.00th=[ 6980], 60.00th=[ 7439], 00:29:35.920 | 70.00th=[ 8160], 80.00th=[ 9110], 90.00th=[11731], 95.00th=[49546], 00:29:35.920 | 99.00th=[51643], 99.50th=[52167], 99.90th=[52691], 99.95th=[53216], 00:29:35.920 | 99.99th=[53216] 00:29:35.920 bw ( KiB/s): min=26880, max=43264, per=36.15%, avg=35640.89, stdev=5649.11, samples=9 00:29:35.920 iops : min= 210, max= 338, avg=278.44, stdev=44.13, samples=9 00:29:35.920 lat (msec) : 4=0.29%, 10=85.95%, 20=4.10%, 50=6.58%, 100=3.07% 00:29:35.920 cpu : usr=92.44%, sys=7.20%, ctx=10, majf=0, minf=83 00:29:35.920 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:35.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.920 issued rwts: total=1367,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:35.920 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:35.920 filename0: (groupid=0, jobs=1): err= 0: pid=1709100: Wed Jul 24 19:30:21 2024 00:29:35.920 read: IOPS=254, BW=31.8MiB/s (33.4MB/s)(160MiB/5008msec) 00:29:35.920 slat (nsec): min=5900, max=25442, avg=8743.88, stdev=2530.39 00:29:35.920 clat (usec): min=3947, max=91304, avg=11758.53, stdev=13661.66 00:29:35.920 lat (usec): min=3954, max=91316, avg=11767.28, stdev=13661.92 00:29:35.920 clat percentiles 
(usec): 00:29:35.920 | 1.00th=[ 4293], 5.00th=[ 4686], 10.00th=[ 5014], 20.00th=[ 5538], 00:29:35.920 | 30.00th=[ 6128], 40.00th=[ 6587], 50.00th=[ 7046], 60.00th=[ 7635], 00:29:35.920 | 70.00th=[ 8455], 80.00th=[ 9634], 90.00th=[47973], 95.00th=[49546], 00:29:35.920 | 99.00th=[51643], 99.50th=[52167], 99.90th=[91751], 99.95th=[91751], 00:29:35.920 | 99.99th=[91751] 00:29:35.920 bw ( KiB/s): min=18432, max=41984, per=33.05%, avg=32581.70, stdev=6309.03, samples=10 00:29:35.920 iops : min= 144, max= 328, avg=254.50, stdev=49.27, samples=10 00:29:35.920 lat (msec) : 4=0.08%, 10=83.31%, 20=5.72%, 50=6.74%, 100=4.15% 00:29:35.920 cpu : usr=92.59%, sys=7.07%, ctx=13, majf=0, minf=118 00:29:35.920 IO depths : 1=1.3%, 2=98.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:35.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.920 issued rwts: total=1276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:35.920 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:35.920 00:29:35.920 Run status group 0 (all jobs): 00:29:35.920 READ: bw=96.3MiB/s (101MB/s), 30.5MiB/s-34.2MiB/s (32.0MB/s-35.8MB/s), io=484MiB (507MB), run=5003-5025msec 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:35.920 bdev_null0 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:35.920 [2024-07-24 19:30:21.380910] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:35.920 bdev_null1 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:35.920 bdev_null2 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.920 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:35.921 { 00:29:35.921 "params": { 00:29:35.921 "name": "Nvme$subsystem", 00:29:35.921 "trtype": "$TEST_TRANSPORT", 00:29:35.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.921 "adrfam": "ipv4", 00:29:35.921 "trsvcid": "$NVMF_PORT", 00:29:35.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.921 "hdgst": ${hdgst:-false}, 00:29:35.921 "ddgst": ${ddgst:-false} 00:29:35.921 }, 00:29:35.921 "method": "bdev_nvme_attach_controller" 00:29:35.921 } 00:29:35.921 EOF 00:29:35.921 )") 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:35.921 { 00:29:35.921 "params": { 00:29:35.921 "name": "Nvme$subsystem", 00:29:35.921 "trtype": "$TEST_TRANSPORT", 00:29:35.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.921 "adrfam": "ipv4", 00:29:35.921 "trsvcid": "$NVMF_PORT", 00:29:35.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.921 "hdgst": ${hdgst:-false}, 00:29:35.921 "ddgst": ${ddgst:-false} 00:29:35.921 }, 00:29:35.921 "method": "bdev_nvme_attach_controller" 00:29:35.921 } 00:29:35.921 EOF 00:29:35.921 )") 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:35.921 19:30:21 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:35.921 { 00:29:35.921 "params": { 00:29:35.921 "name": "Nvme$subsystem", 00:29:35.921 "trtype": "$TEST_TRANSPORT", 00:29:35.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.921 "adrfam": "ipv4", 00:29:35.921 "trsvcid": "$NVMF_PORT", 00:29:35.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.921 "hdgst": ${hdgst:-false}, 00:29:35.921 "ddgst": ${ddgst:-false} 00:29:35.921 }, 00:29:35.921 "method": "bdev_nvme_attach_controller" 00:29:35.921 } 00:29:35.921 EOF 00:29:35.921 )") 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:35.921 "params": { 00:29:35.921 "name": "Nvme0", 00:29:35.921 "trtype": "tcp", 00:29:35.921 "traddr": "10.0.0.2", 00:29:35.921 "adrfam": "ipv4", 00:29:35.921 "trsvcid": "4420", 00:29:35.921 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:35.921 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:35.921 "hdgst": false, 00:29:35.921 "ddgst": false 00:29:35.921 }, 00:29:35.921 "method": "bdev_nvme_attach_controller" 00:29:35.921 },{ 00:29:35.921 "params": { 00:29:35.921 "name": "Nvme1", 00:29:35.921 "trtype": "tcp", 00:29:35.921 "traddr": "10.0.0.2", 00:29:35.921 "adrfam": "ipv4", 00:29:35.921 "trsvcid": "4420", 00:29:35.921 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:35.921 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:35.921 "hdgst": false, 00:29:35.921 "ddgst": false 00:29:35.921 }, 00:29:35.921 "method": "bdev_nvme_attach_controller" 00:29:35.921 },{ 00:29:35.921 "params": { 00:29:35.921 "name": "Nvme2", 00:29:35.921 "trtype": "tcp", 00:29:35.921 "traddr": "10.0.0.2", 00:29:35.921 "adrfam": "ipv4", 00:29:35.921 "trsvcid": "4420", 00:29:35.921 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:35.921 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:35.921 "hdgst": false, 00:29:35.921 "ddgst": false 00:29:35.921 }, 00:29:35.921 "method": "bdev_nvme_attach_controller" 00:29:35.921 }' 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:35.921 
19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:35.921 19:30:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:35.921 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:35.921 ... 00:29:35.921 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:35.921 ... 00:29:35.921 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:35.921 ... 00:29:35.921 fio-3.35 00:29:35.921 Starting 24 threads 00:29:35.921 EAL: No free 2048 kB hugepages reported on node 1 00:29:48.169 00:29:48.169 filename0: (groupid=0, jobs=1): err= 0: pid=1710293: Wed Jul 24 19:30:32 2024 00:29:48.169 read: IOPS=610, BW=2442KiB/s (2500kB/s)(23.9MiB/10012msec) 00:29:48.169 slat (nsec): min=7019, max=89082, avg=25646.23, stdev=13857.92 00:29:48.169 clat (usec): min=19933, max=29242, avg=26012.18, stdev=649.90 00:29:48.169 lat (usec): min=19942, max=29257, avg=26037.82, stdev=647.90 00:29:48.169 clat percentiles (usec): 00:29:48.169 | 1.00th=[24773], 5.00th=[25297], 10.00th=[25297], 20.00th=[25560], 00:29:48.169 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26084], 00:29:48.169 | 70.00th=[26346], 80.00th=[26346], 90.00th=[26608], 95.00th=[26870], 00:29:48.169 | 99.00th=[27657], 99.50th=[27657], 99.90th=[29230], 99.95th=[29230], 00:29:48.169 | 99.99th=[29230] 00:29:48.169 bw ( KiB/s): min= 2304, max= 2560, per=4.19%, avg=2438.40, stdev=50.44, samples=20 00:29:48.169 iops : min= 576, max= 640, avg=609.60, stdev=12.61, samples=20 00:29:48.169 lat (msec) : 20=0.08%, 50=99.92% 00:29:48.169 cpu : usr=97.49%, sys=2.17%, ctx=14, majf=0, minf=9 00:29:48.169 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:48.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.169 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.169 issued rwts: total=6112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.169 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.169 filename0: (groupid=0, jobs=1): err= 0: pid=1710294: Wed Jul 24 19:30:32 2024 00:29:48.169 read: IOPS=608, BW=2433KiB/s (2491kB/s)(23.8MiB/10010msec) 00:29:48.169 slat (nsec): min=6454, max=86981, avg=30478.39, stdev=16675.14 00:29:48.169 clat (usec): min=11435, max=43229, avg=26024.26, stdev=1767.38 00:29:48.169 lat (usec): min=11442, max=43263, avg=26054.74, stdev=1766.76 00:29:48.169 clat percentiles (usec): 00:29:48.169 | 1.00th=[20841], 5.00th=[25035], 10.00th=[25297], 20.00th=[25560], 00:29:48.169 | 30.00th=[25822], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:29:48.169 | 70.00th=[26084], 80.00th=[26346], 90.00th=[26608], 95.00th=[27132], 00:29:48.169 | 99.00th=[33817], 99.50th=[35914], 99.90th=[42206], 99.95th=[43254], 00:29:48.169 | 99.99th=[43254] 00:29:48.169 bw ( KiB/s): min= 2304, max= 2560, per=4.18%, avg=2428.63, stdev=61.18, samples=19 00:29:48.169 iops : min= 576, max= 640, avg=607.16, stdev=15.29, samples=19 00:29:48.169 lat 
(msec) : 20=0.85%, 50=99.15% 00:29:48.169 cpu : usr=97.32%, sys=2.33%, ctx=15, majf=0, minf=9 00:29:48.169 IO depths : 1=5.5%, 2=11.1%, 4=23.1%, 8=52.9%, 16=7.4%, 32=0.0%, >=64=0.0% 00:29:48.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.169 complete : 0=0.0%, 4=93.7%, 8=0.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.169 issued rwts: total=6088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.169 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.169 filename0: (groupid=0, jobs=1): err= 0: pid=1710295: Wed Jul 24 19:30:32 2024 00:29:48.169 read: IOPS=609, BW=2437KiB/s (2495kB/s)(23.8MiB/10006msec) 00:29:48.169 slat (usec): min=6, max=101, avg=31.37, stdev=15.43 00:29:48.169 clat (usec): min=19362, max=47435, avg=25990.82, stdev=1051.70 00:29:48.169 lat (usec): min=19376, max=47452, avg=26022.19, stdev=1049.35 00:29:48.169 clat percentiles (usec): 00:29:48.169 | 1.00th=[24773], 5.00th=[25035], 10.00th=[25297], 20.00th=[25560], 00:29:48.169 | 30.00th=[25822], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:29:48.169 | 70.00th=[26084], 80.00th=[26346], 90.00th=[26608], 95.00th=[26870], 00:29:48.169 | 99.00th=[27657], 99.50th=[31851], 99.90th=[37487], 99.95th=[47449], 00:29:48.169 | 99.99th=[47449] 00:29:48.169 bw ( KiB/s): min= 2264, max= 2560, per=4.18%, avg=2432.00, stdev=50.67, samples=19 00:29:48.169 iops : min= 566, max= 640, avg=608.00, stdev=12.67, samples=19 00:29:48.169 lat (msec) : 20=0.26%, 50=99.74% 00:29:48.169 cpu : usr=97.43%, sys=2.22%, ctx=16, majf=0, minf=9 00:29:48.169 IO depths : 1=6.0%, 2=11.9%, 4=24.4%, 8=51.2%, 16=6.6%, 32=0.0%, >=64=0.0% 00:29:48.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.169 complete : 0=0.0%, 4=93.8%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.169 issued rwts: total=6096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.169 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.169 filename0: (groupid=0, jobs=1): err= 0: pid=1710296: Wed Jul 24 19:30:32 2024 00:29:48.169 read: IOPS=609, BW=2437KiB/s (2496kB/s)(23.8MiB/10005msec) 00:29:48.169 slat (nsec): min=6481, max=89510, avg=29268.80, stdev=15257.66 00:29:48.169 clat (usec): min=19500, max=47081, avg=25990.89, stdev=1087.72 00:29:48.169 lat (usec): min=19513, max=47108, avg=26020.15, stdev=1086.89 00:29:48.170 clat percentiles (usec): 00:29:48.170 | 1.00th=[24249], 5.00th=[25035], 10.00th=[25297], 20.00th=[25560], 00:29:48.170 | 30.00th=[25822], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:29:48.170 | 70.00th=[26084], 80.00th=[26346], 90.00th=[26608], 95.00th=[26870], 00:29:48.170 | 99.00th=[28181], 99.50th=[32113], 99.90th=[36963], 99.95th=[36963], 00:29:48.170 | 99.99th=[46924] 00:29:48.170 bw ( KiB/s): min= 2308, max= 2560, per=4.18%, avg=2432.21, stdev=44.95, samples=19 00:29:48.170 iops : min= 577, max= 640, avg=608.05, stdev=11.24, samples=19 00:29:48.170 lat (msec) : 20=0.31%, 50=99.69% 00:29:48.170 cpu : usr=97.49%, sys=2.16%, ctx=16, majf=0, minf=9 00:29:48.170 IO depths : 1=5.7%, 2=11.4%, 4=23.8%, 8=52.3%, 16=6.8%, 32=0.0%, >=64=0.0% 00:29:48.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.170 complete : 0=0.0%, 4=93.8%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.170 issued rwts: total=6096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.170 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.170 filename0: (groupid=0, jobs=1): err= 0: pid=1710297: Wed Jul 24 19:30:32 2024 00:29:48.170 read: 
IOPS=612, BW=2450KiB/s (2509kB/s)(23.9MiB/10005msec) 00:29:48.170 slat (nsec): min=6399, max=72389, avg=20768.85, stdev=11496.42 00:29:48.170 clat (usec): min=11140, max=44022, avg=25951.98, stdev=1187.76 00:29:48.170 lat (usec): min=11151, max=44036, avg=25972.75, stdev=1187.17 00:29:48.170 clat percentiles (usec): 00:29:48.170 | 1.00th=[21365], 5.00th=[25297], 10.00th=[25297], 20.00th=[25560], 00:29:48.170 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26084], 00:29:48.170 | 70.00th=[26346], 80.00th=[26346], 90.00th=[26608], 95.00th=[26870], 00:29:48.170 | 99.00th=[27132], 99.50th=[27395], 99.90th=[27657], 99.95th=[38011], 00:29:48.170 | 99.99th=[43779] 00:29:48.170 bw ( KiB/s): min= 2432, max= 2560, per=4.22%, avg=2452.21, stdev=47.95, samples=19 00:29:48.170 iops : min= 608, max= 640, avg=613.05, stdev=11.99, samples=19 00:29:48.170 lat (msec) : 20=0.95%, 50=99.05% 00:29:48.170 cpu : usr=97.19%, sys=2.45%, ctx=19, majf=0, minf=9 00:29:48.170 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:29:48.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.170 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.170 issued rwts: total=6128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.170 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.170 filename0: (groupid=0, jobs=1): err= 0: pid=1710298: Wed Jul 24 19:30:32 2024 00:29:48.170 read: IOPS=624, BW=2496KiB/s (2556kB/s)(24.4MiB/10014msec) 00:29:48.170 slat (nsec): min=6400, max=92786, avg=18855.89, stdev=14761.90 00:29:48.170 clat (usec): min=9827, max=42517, avg=25496.17, stdev=3123.22 00:29:48.170 lat (usec): min=9836, max=42523, avg=25515.03, stdev=3124.17 00:29:48.170 clat percentiles (usec): 00:29:48.170 | 1.00th=[12649], 5.00th=[18482], 10.00th=[25035], 20.00th=[25560], 00:29:48.170 | 30.00th=[25822], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:29:48.170 | 70.00th=[26346], 80.00th=[26346], 90.00th=[26608], 95.00th=[27132], 00:29:48.170 | 99.00th=[33817], 99.50th=[37487], 99.90th=[42206], 99.95th=[42206], 00:29:48.170 | 99.99th=[42730] 00:29:48.170 bw ( KiB/s): min= 2400, max= 2752, per=4.29%, avg=2493.20, stdev=96.53, samples=20 00:29:48.170 iops : min= 600, max= 688, avg=623.30, stdev=24.13, samples=20 00:29:48.170 lat (msec) : 10=0.14%, 20=7.23%, 50=92.62% 00:29:48.170 cpu : usr=97.08%, sys=2.55%, ctx=25, majf=0, minf=11 00:29:48.170 IO depths : 1=4.5%, 2=9.6%, 4=21.4%, 8=56.1%, 16=8.4%, 32=0.0%, >=64=0.0% 00:29:48.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.170 complete : 0=0.0%, 4=93.3%, 8=1.2%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.170 issued rwts: total=6249,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.170 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.170 filename0: (groupid=0, jobs=1): err= 0: pid=1710299: Wed Jul 24 19:30:32 2024 00:29:48.170 read: IOPS=609, BW=2436KiB/s (2495kB/s)(23.8MiB/10008msec) 00:29:48.170 slat (nsec): min=7309, max=85814, avg=24525.89, stdev=12572.66 00:29:48.170 clat (usec): min=19516, max=37248, avg=26057.45, stdev=864.21 00:29:48.170 lat (usec): min=19531, max=37273, avg=26081.98, stdev=862.96 00:29:48.170 clat percentiles (usec): 00:29:48.170 | 1.00th=[24773], 5.00th=[25297], 10.00th=[25560], 20.00th=[25560], 00:29:48.170 | 30.00th=[25822], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:29:48.170 | 70.00th=[26346], 80.00th=[26346], 90.00th=[26608], 95.00th=[26870], 00:29:48.170 | 99.00th=[27657], 
99.50th=[31851], 99.90th=[36963], 99.95th=[36963], 00:29:48.170 | 99.99th=[37487] 00:29:48.170 bw ( KiB/s): min= 2304, max= 2560, per=4.18%, avg=2432.00, stdev=60.34, samples=19 00:29:48.170 iops : min= 576, max= 640, avg=608.00, stdev=15.08, samples=19 00:29:48.170 lat (msec) : 20=0.26%, 50=99.74% 00:29:48.170 cpu : usr=97.33%, sys=2.32%, ctx=13, majf=0, minf=9 00:29:48.170 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:48.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.170 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.170 issued rwts: total=6096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.170 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.170 filename0: (groupid=0, jobs=1): err= 0: pid=1710300: Wed Jul 24 19:30:32 2024 00:29:48.170 read: IOPS=616, BW=2466KiB/s (2525kB/s)(24.1MiB/10003msec) 00:29:48.170 slat (nsec): min=6391, max=82846, avg=19810.65, stdev=12568.06 00:29:48.170 clat (usec): min=10206, max=41149, avg=25809.17, stdev=1830.07 00:29:48.170 lat (usec): min=10259, max=41162, avg=25828.98, stdev=1830.25 00:29:48.170 clat percentiles (usec): 00:29:48.170 | 1.00th=[16581], 5.00th=[25035], 10.00th=[25297], 20.00th=[25560], 00:29:48.170 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26084], 00:29:48.170 | 70.00th=[26346], 80.00th=[26346], 90.00th=[26608], 95.00th=[26870], 00:29:48.170 | 99.00th=[28181], 99.50th=[30540], 99.90th=[33162], 99.95th=[41157], 00:29:48.170 | 99.99th=[41157] 00:29:48.170 bw ( KiB/s): min= 2384, max= 2912, per=4.24%, avg=2468.21, stdev=115.60, samples=19 00:29:48.170 iops : min= 596, max= 728, avg=617.05, stdev=28.90, samples=19 00:29:48.170 lat (msec) : 20=2.22%, 50=97.78% 00:29:48.170 cpu : usr=96.68%, sys=2.95%, ctx=20, majf=0, minf=9 00:29:48.170 IO depths : 1=5.8%, 2=11.6%, 4=23.5%, 8=52.2%, 16=6.9%, 32=0.0%, >=64=0.0% 00:29:48.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.170 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.170 issued rwts: total=6166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.170 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.170 filename1: (groupid=0, jobs=1): err= 0: pid=1710301: Wed Jul 24 19:30:32 2024 00:29:48.170 read: IOPS=610, BW=2443KiB/s (2502kB/s)(23.9MiB/10003msec) 00:29:48.170 slat (nsec): min=6454, max=82472, avg=23248.76, stdev=12355.15 00:29:48.170 clat (usec): min=3566, max=51133, avg=25973.86, stdev=1994.27 00:29:48.170 lat (usec): min=3573, max=51150, avg=25997.11, stdev=1994.38 00:29:48.170 clat percentiles (usec): 00:29:48.170 | 1.00th=[24511], 5.00th=[25297], 10.00th=[25297], 20.00th=[25560], 00:29:48.170 | 30.00th=[25822], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:29:48.170 | 70.00th=[26346], 80.00th=[26346], 90.00th=[26608], 95.00th=[26870], 00:29:48.170 | 99.00th=[27395], 99.50th=[27657], 99.90th=[51119], 99.95th=[51119], 00:29:48.170 | 99.99th=[51119] 00:29:48.170 bw ( KiB/s): min= 2176, max= 2560, per=4.18%, avg=2431.68, stdev=84.84, samples=19 00:29:48.170 iops : min= 544, max= 640, avg=607.89, stdev=21.17, samples=19 00:29:48.170 lat (msec) : 4=0.23%, 10=0.26%, 20=0.26%, 50=98.99%, 100=0.26% 00:29:48.170 cpu : usr=97.30%, sys=2.36%, ctx=13, majf=0, minf=9 00:29:48.170 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:48.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.170 complete : 0=0.0%, 
4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.170 issued rwts: total=6110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.170 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.170 filename1: (groupid=0, jobs=1): err= 0: pid=1710302: Wed Jul 24 19:30:32 2024 00:29:48.170 read: IOPS=612, BW=2450KiB/s (2509kB/s)(23.9MiB/10005msec) 00:29:48.170 slat (nsec): min=3868, max=88676, avg=26758.22, stdev=14045.17 00:29:48.170 clat (usec): min=9830, max=34082, avg=25914.70, stdev=1307.10 00:29:48.170 lat (usec): min=9834, max=34113, avg=25941.46, stdev=1307.36 00:29:48.170 clat percentiles (usec): 00:29:48.170 | 1.00th=[19792], 5.00th=[25297], 10.00th=[25297], 20.00th=[25560], 00:29:48.170 | 30.00th=[25822], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:29:48.170 | 70.00th=[26346], 80.00th=[26346], 90.00th=[26608], 95.00th=[26870], 00:29:48.170 | 99.00th=[27657], 99.50th=[27919], 99.90th=[33424], 99.95th=[33817], 00:29:48.170 | 99.99th=[34341] 00:29:48.170 bw ( KiB/s): min= 2432, max= 2560, per=4.22%, avg=2452.21, stdev=47.95, samples=19 00:29:48.170 iops : min= 608, max= 640, avg=613.05, stdev=11.99, samples=19 00:29:48.170 lat (msec) : 10=0.26%, 20=0.80%, 50=98.94% 00:29:48.170 cpu : usr=96.97%, sys=2.69%, ctx=29, majf=0, minf=9 00:29:48.170 IO depths : 1=5.9%, 2=11.8%, 4=24.2%, 8=51.5%, 16=6.6%, 32=0.0%, >=64=0.0% 00:29:48.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.170 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.170 issued rwts: total=6128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.170 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.170 filename1: (groupid=0, jobs=1): err= 0: pid=1710303: Wed Jul 24 19:30:32 2024 00:29:48.170 read: IOPS=610, BW=2442KiB/s (2500kB/s)(23.9MiB/10012msec) 00:29:48.171 slat (nsec): min=6885, max=89604, avg=30302.42, stdev=14203.90 00:29:48.171 clat (usec): min=19246, max=41171, avg=25953.73, stdev=768.39 00:29:48.171 lat (usec): min=19266, max=41186, avg=25984.03, stdev=766.98 00:29:48.171 clat percentiles (usec): 00:29:48.171 | 1.00th=[24773], 5.00th=[25035], 10.00th=[25297], 20.00th=[25560], 00:29:48.171 | 30.00th=[25822], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:29:48.171 | 70.00th=[26084], 80.00th=[26346], 90.00th=[26608], 95.00th=[26870], 00:29:48.171 | 99.00th=[27395], 99.50th=[27657], 99.90th=[29230], 99.95th=[29230], 00:29:48.171 | 99.99th=[41157] 00:29:48.171 bw ( KiB/s): min= 2304, max= 2560, per=4.19%, avg=2438.40, stdev=50.44, samples=20 00:29:48.171 iops : min= 576, max= 640, avg=609.60, stdev=12.61, samples=20 00:29:48.171 lat (msec) : 20=0.43%, 50=99.57% 00:29:48.171 cpu : usr=97.41%, sys=2.23%, ctx=17, majf=0, minf=9 00:29:48.171 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:48.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.171 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.171 issued rwts: total=6112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.171 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.171 filename1: (groupid=0, jobs=1): err= 0: pid=1710304: Wed Jul 24 19:30:32 2024 00:29:48.171 read: IOPS=610, BW=2441KiB/s (2499kB/s)(23.9MiB/10016msec) 00:29:48.171 slat (nsec): min=4140, max=72582, avg=22388.21, stdev=12007.78 00:29:48.171 clat (usec): min=19402, max=34545, avg=26026.86, stdev=733.22 00:29:48.171 lat (usec): min=19408, max=34558, avg=26049.24, stdev=731.57 
00:29:48.171 clat percentiles (usec): 00:29:48.171 | 1.00th=[24773], 5.00th=[25297], 10.00th=[25560], 20.00th=[25560], 00:29:48.171 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26084], 00:29:48.171 | 70.00th=[26346], 80.00th=[26346], 90.00th=[26608], 95.00th=[26870], 00:29:48.171 | 99.00th=[27395], 99.50th=[27395], 99.90th=[34341], 99.95th=[34341], 00:29:48.171 | 99.99th=[34341] 00:29:48.171 bw ( KiB/s): min= 2304, max= 2560, per=4.20%, avg=2441.85, stdev=52.30, samples=20 00:29:48.171 iops : min= 576, max= 640, avg=610.45, stdev=13.06, samples=20 00:29:48.171 lat (msec) : 20=0.16%, 50=99.84% 00:29:48.171 cpu : usr=97.51%, sys=2.14%, ctx=13, majf=0, minf=9 00:29:48.171 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:48.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.171 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.171 issued rwts: total=6112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.171 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.171 filename1: (groupid=0, jobs=1): err= 0: pid=1710305: Wed Jul 24 19:30:32 2024 00:29:48.171 read: IOPS=612, BW=2450KiB/s (2509kB/s)(23.9MiB/10005msec) 00:29:48.171 slat (nsec): min=6550, max=85838, avg=23998.01, stdev=13550.39 00:29:48.171 clat (usec): min=10581, max=30816, avg=25936.99, stdev=1252.99 00:29:48.171 lat (usec): min=10598, max=30830, avg=25960.98, stdev=1252.32 00:29:48.171 clat percentiles (usec): 00:29:48.171 | 1.00th=[20317], 5.00th=[25297], 10.00th=[25297], 20.00th=[25560], 00:29:48.171 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26084], 00:29:48.171 | 70.00th=[26346], 80.00th=[26346], 90.00th=[26608], 95.00th=[26870], 00:29:48.171 | 99.00th=[27657], 99.50th=[27657], 99.90th=[29492], 99.95th=[30802], 00:29:48.171 | 99.99th=[30802] 00:29:48.171 bw ( KiB/s): min= 2432, max= 2560, per=4.22%, avg=2452.21, stdev=47.95, samples=19 00:29:48.171 iops : min= 608, max= 640, avg=613.05, stdev=11.99, samples=19 00:29:48.171 lat (msec) : 20=0.88%, 50=99.12% 00:29:48.171 cpu : usr=96.98%, sys=2.67%, ctx=24, majf=0, minf=9 00:29:48.171 IO depths : 1=6.2%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:48.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.171 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.171 issued rwts: total=6128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.171 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.171 filename1: (groupid=0, jobs=1): err= 0: pid=1710306: Wed Jul 24 19:30:32 2024 00:29:48.171 read: IOPS=601, BW=2406KiB/s (2463kB/s)(23.5MiB/10010msec) 00:29:48.171 slat (nsec): min=6391, max=88048, avg=22289.49, stdev=16320.15 00:29:48.171 clat (usec): min=9707, max=52640, avg=26459.43, stdev=4003.95 00:29:48.171 lat (usec): min=9727, max=52661, avg=26481.72, stdev=4002.68 00:29:48.171 clat percentiles (usec): 00:29:48.171 | 1.00th=[14484], 5.00th=[20579], 10.00th=[24511], 20.00th=[25560], 00:29:48.171 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26346], 00:29:48.171 | 70.00th=[26346], 80.00th=[26870], 90.00th=[30278], 95.00th=[34341], 00:29:48.171 | 99.00th=[41157], 99.50th=[43254], 99.90th=[46924], 99.95th=[52691], 00:29:48.171 | 99.99th=[52691] 00:29:48.171 bw ( KiB/s): min= 2288, max= 2544, per=4.12%, avg=2397.47, stdev=68.30, samples=19 00:29:48.171 iops : min= 572, max= 636, avg=599.37, stdev=17.08, samples=19 00:29:48.171 lat (msec) : 
10=0.10%, 20=4.22%, 50=95.60%, 100=0.08% 00:29:48.171 cpu : usr=97.25%, sys=2.40%, ctx=15, majf=0, minf=9 00:29:48.171 IO depths : 1=0.3%, 2=2.4%, 4=12.6%, 8=70.2%, 16=14.6%, 32=0.0%, >=64=0.0% 00:29:48.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.171 complete : 0=0.0%, 4=91.6%, 8=4.9%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.171 issued rwts: total=6020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.171 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.171 filename1: (groupid=0, jobs=1): err= 0: pid=1710307: Wed Jul 24 19:30:32 2024 00:29:48.171 read: IOPS=607, BW=2430KiB/s (2489kB/s)(23.7MiB/10003msec) 00:29:48.171 slat (nsec): min=6451, max=86062, avg=21845.12, stdev=12957.04 00:29:48.171 clat (usec): min=13481, max=44291, avg=26168.58, stdev=2314.57 00:29:48.171 lat (usec): min=13493, max=44308, avg=26190.42, stdev=2313.61 00:29:48.171 clat percentiles (usec): 00:29:48.171 | 1.00th=[15664], 5.00th=[25035], 10.00th=[25560], 20.00th=[25560], 00:29:48.171 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26084], 00:29:48.171 | 70.00th=[26346], 80.00th=[26346], 90.00th=[26608], 95.00th=[27132], 00:29:48.171 | 99.00th=[37487], 99.50th=[38011], 99.90th=[43779], 99.95th=[44303], 00:29:48.171 | 99.99th=[44303] 00:29:48.171 bw ( KiB/s): min= 2304, max= 2560, per=4.18%, avg=2431.16, stdev=56.81, samples=19 00:29:48.171 iops : min= 576, max= 640, avg=607.79, stdev=14.20, samples=19 00:29:48.171 lat (msec) : 20=1.88%, 50=98.12% 00:29:48.171 cpu : usr=97.19%, sys=2.46%, ctx=19, majf=0, minf=9 00:29:48.171 IO depths : 1=3.3%, 2=7.0%, 4=16.3%, 8=62.6%, 16=10.8%, 32=0.0%, >=64=0.0% 00:29:48.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.171 complete : 0=0.0%, 4=92.4%, 8=3.5%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.171 issued rwts: total=6078,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.171 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.171 filename1: (groupid=0, jobs=1): err= 0: pid=1710308: Wed Jul 24 19:30:32 2024 00:29:48.171 read: IOPS=645, BW=2580KiB/s (2642kB/s)(25.2MiB/10013msec) 00:29:48.171 slat (nsec): min=6137, max=81553, avg=31426.61, stdev=16073.92 00:29:48.171 clat (usec): min=9237, max=43401, avg=24563.13, stdev=3948.87 00:29:48.171 lat (usec): min=9263, max=43445, avg=24594.56, stdev=3954.20 00:29:48.171 clat percentiles (usec): 00:29:48.171 | 1.00th=[13042], 5.00th=[15401], 10.00th=[17957], 20.00th=[23987], 00:29:48.171 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25822], 60.00th=[25822], 00:29:48.171 | 70.00th=[26084], 80.00th=[26346], 90.00th=[26608], 95.00th=[27132], 00:29:48.171 | 99.00th=[35914], 99.50th=[36439], 99.90th=[40109], 99.95th=[43254], 00:29:48.171 | 99.99th=[43254] 00:29:48.171 bw ( KiB/s): min= 2432, max= 3280, per=4.43%, avg=2577.20, stdev=248.05, samples=20 00:29:48.171 iops : min= 608, max= 820, avg=644.30, stdev=62.01, samples=20 00:29:48.171 lat (msec) : 10=0.09%, 20=14.41%, 50=85.49% 00:29:48.171 cpu : usr=97.93%, sys=1.63%, ctx=128, majf=0, minf=9 00:29:48.171 IO depths : 1=3.2%, 2=7.2%, 4=18.6%, 8=60.2%, 16=10.8%, 32=0.0%, >=64=0.0% 00:29:48.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.171 complete : 0=0.0%, 4=92.9%, 8=2.1%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.171 issued rwts: total=6459,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.171 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.171 filename2: (groupid=0, jobs=1): err= 0: pid=1710309: Wed Jul 24 
19:30:32 2024 00:29:48.171 read: IOPS=609, BW=2437KiB/s (2495kB/s)(23.8MiB/10006msec) 00:29:48.171 slat (nsec): min=6436, max=82863, avg=29318.62, stdev=14827.56 00:29:48.171 clat (usec): min=19479, max=37283, avg=25988.79, stdev=928.93 00:29:48.171 lat (usec): min=19500, max=37303, avg=26018.11, stdev=927.55 00:29:48.171 clat percentiles (usec): 00:29:48.171 | 1.00th=[24773], 5.00th=[25035], 10.00th=[25297], 20.00th=[25560], 00:29:48.171 | 30.00th=[25822], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:29:48.171 | 70.00th=[26084], 80.00th=[26346], 90.00th=[26608], 95.00th=[26870], 00:29:48.171 | 99.00th=[27657], 99.50th=[31065], 99.90th=[36963], 99.95th=[37487], 00:29:48.171 | 99.99th=[37487] 00:29:48.171 bw ( KiB/s): min= 2304, max= 2560, per=4.18%, avg=2432.00, stdev=42.67, samples=19 00:29:48.171 iops : min= 576, max= 640, avg=608.00, stdev=10.67, samples=19 00:29:48.171 lat (msec) : 20=0.36%, 50=99.64% 00:29:48.171 cpu : usr=97.14%, sys=2.49%, ctx=17, majf=0, minf=9 00:29:48.171 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:29:48.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.171 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.172 issued rwts: total=6096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.172 filename2: (groupid=0, jobs=1): err= 0: pid=1710310: Wed Jul 24 19:30:32 2024 00:29:48.172 read: IOPS=553, BW=2212KiB/s (2265kB/s)(21.6MiB/10002msec) 00:29:48.172 slat (nsec): min=6423, max=87901, avg=24019.44, stdev=15301.86 00:29:48.172 clat (usec): min=6596, max=50594, avg=28725.18, stdev=5262.04 00:29:48.172 lat (usec): min=6603, max=50616, avg=28749.20, stdev=5258.21 00:29:48.172 clat percentiles (usec): 00:29:48.172 | 1.00th=[21890], 5.00th=[25035], 10.00th=[25560], 20.00th=[25822], 00:29:48.172 | 30.00th=[26084], 40.00th=[26084], 50.00th=[26346], 60.00th=[26608], 00:29:48.172 | 70.00th=[27657], 80.00th=[33424], 90.00th=[37487], 95.00th=[42730], 00:29:48.172 | 99.00th=[43254], 99.50th=[43779], 99.90th=[50594], 99.95th=[50594], 00:29:48.172 | 99.99th=[50594] 00:29:48.172 bw ( KiB/s): min= 1792, max= 2560, per=3.77%, avg=2194.21, stdev=283.26, samples=19 00:29:48.172 iops : min= 448, max= 640, avg=548.53, stdev=70.78, samples=19 00:29:48.172 lat (msec) : 10=0.07%, 20=0.65%, 50=98.99%, 100=0.29% 00:29:48.172 cpu : usr=97.12%, sys=2.54%, ctx=18, majf=0, minf=10 00:29:48.172 IO depths : 1=3.5%, 2=7.2%, 4=21.3%, 8=58.6%, 16=9.4%, 32=0.0%, >=64=0.0% 00:29:48.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.172 complete : 0=0.0%, 4=93.9%, 8=0.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.172 issued rwts: total=5532,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.172 filename2: (groupid=0, jobs=1): err= 0: pid=1710311: Wed Jul 24 19:30:32 2024 00:29:48.172 read: IOPS=610, BW=2442KiB/s (2500kB/s)(23.9MiB/10012msec) 00:29:48.172 slat (nsec): min=6414, max=63066, avg=11240.58, stdev=7160.99 00:29:48.172 clat (usec): min=18957, max=28561, avg=26099.21, stdev=709.67 00:29:48.172 lat (usec): min=18964, max=28568, avg=26110.45, stdev=709.19 00:29:48.172 clat percentiles (usec): 00:29:48.172 | 1.00th=[25035], 5.00th=[25560], 10.00th=[25560], 20.00th=[25560], 00:29:48.172 | 30.00th=[26084], 40.00th=[26084], 50.00th=[26084], 60.00th=[26346], 00:29:48.172 | 70.00th=[26346], 80.00th=[26346], 90.00th=[26608], 
95.00th=[26870], 00:29:48.172 | 99.00th=[27657], 99.50th=[27657], 99.90th=[28443], 99.95th=[28443], 00:29:48.172 | 99.99th=[28443] 00:29:48.172 bw ( KiB/s): min= 2360, max= 2560, per=4.20%, avg=2441.20, stdev=43.68, samples=20 00:29:48.172 iops : min= 590, max= 640, avg=610.30, stdev=10.92, samples=20 00:29:48.172 lat (msec) : 20=0.52%, 50=99.48% 00:29:48.172 cpu : usr=97.45%, sys=2.21%, ctx=10, majf=0, minf=9 00:29:48.172 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:48.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.172 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.172 issued rwts: total=6112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.172 filename2: (groupid=0, jobs=1): err= 0: pid=1710312: Wed Jul 24 19:30:32 2024 00:29:48.172 read: IOPS=536, BW=2147KiB/s (2198kB/s)(21.0MiB/10003msec) 00:29:48.172 slat (nsec): min=6391, max=88196, avg=22188.23, stdev=16215.88 00:29:48.172 clat (usec): min=3895, max=65671, avg=29708.83, stdev=5308.24 00:29:48.172 lat (usec): min=3909, max=65696, avg=29731.02, stdev=5306.65 00:29:48.172 clat percentiles (usec): 00:29:48.172 | 1.00th=[19792], 5.00th=[25560], 10.00th=[25822], 20.00th=[26084], 00:29:48.172 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26346], 60.00th=[29230], 00:29:48.172 | 70.00th=[32900], 80.00th=[36439], 90.00th=[37487], 95.00th=[38011], 00:29:48.172 | 99.00th=[39584], 99.50th=[45876], 99.90th=[50594], 99.95th=[65274], 00:29:48.172 | 99.99th=[65799] 00:29:48.172 bw ( KiB/s): min= 1792, max= 2480, per=3.65%, avg=2122.74, stdev=287.05, samples=19 00:29:48.172 iops : min= 448, max= 620, avg=530.68, stdev=71.76, samples=19 00:29:48.172 lat (msec) : 4=0.11%, 10=0.07%, 20=0.91%, 50=98.60%, 100=0.30% 00:29:48.172 cpu : usr=97.05%, sys=2.60%, ctx=14, majf=0, minf=10 00:29:48.172 IO depths : 1=0.1%, 2=0.1%, 4=8.4%, 8=75.6%, 16=15.9%, 32=0.0%, >=64=0.0% 00:29:48.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.172 complete : 0=0.0%, 4=91.5%, 8=6.0%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.172 issued rwts: total=5368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.172 filename2: (groupid=0, jobs=1): err= 0: pid=1710313: Wed Jul 24 19:30:32 2024 00:29:48.172 read: IOPS=609, BW=2436KiB/s (2495kB/s)(23.8MiB/10008msec) 00:29:48.172 slat (nsec): min=5783, max=87733, avg=29450.47, stdev=14525.73 00:29:48.172 clat (usec): min=19364, max=50570, avg=26016.94, stdev=1156.71 00:29:48.172 lat (usec): min=19379, max=50586, avg=26046.39, stdev=1155.12 00:29:48.172 clat percentiles (usec): 00:29:48.172 | 1.00th=[23200], 5.00th=[25035], 10.00th=[25297], 20.00th=[25560], 00:29:48.172 | 30.00th=[25822], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:29:48.172 | 70.00th=[26084], 80.00th=[26346], 90.00th=[26608], 95.00th=[26870], 00:29:48.172 | 99.00th=[29754], 99.50th=[30802], 99.90th=[39584], 99.95th=[39584], 00:29:48.172 | 99.99th=[50594] 00:29:48.172 bw ( KiB/s): min= 2304, max= 2560, per=4.18%, avg=2432.00, stdev=42.67, samples=19 00:29:48.172 iops : min= 576, max= 640, avg=608.00, stdev=10.67, samples=19 00:29:48.172 lat (msec) : 20=0.36%, 50=99.62%, 100=0.02% 00:29:48.172 cpu : usr=97.39%, sys=2.27%, ctx=14, majf=0, minf=9 00:29:48.172 IO depths : 1=5.6%, 2=11.6%, 4=24.5%, 8=51.4%, 16=6.9%, 32=0.0%, >=64=0.0% 00:29:48.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.172 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.172 issued rwts: total=6096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.172 filename2: (groupid=0, jobs=1): err= 0: pid=1710314: Wed Jul 24 19:30:32 2024 00:29:48.172 read: IOPS=606, BW=2424KiB/s (2482kB/s)(23.7MiB/10002msec) 00:29:48.172 slat (nsec): min=6353, max=85356, avg=27439.88, stdev=15976.37 00:29:48.172 clat (usec): min=5553, max=58650, avg=26137.85, stdev=2840.54 00:29:48.172 lat (usec): min=5560, max=58667, avg=26165.29, stdev=2839.82 00:29:48.172 clat percentiles (usec): 00:29:48.172 | 1.00th=[20317], 5.00th=[25035], 10.00th=[25297], 20.00th=[25560], 00:29:48.172 | 30.00th=[25822], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:29:48.172 | 70.00th=[26084], 80.00th=[26346], 90.00th=[26608], 95.00th=[27132], 00:29:48.172 | 99.00th=[38536], 99.50th=[38536], 99.90th=[58459], 99.95th=[58459], 00:29:48.172 | 99.99th=[58459] 00:29:48.172 bw ( KiB/s): min= 2048, max= 2560, per=4.15%, avg=2410.68, stdev=96.73, samples=19 00:29:48.172 iops : min= 512, max= 640, avg=602.63, stdev=24.17, samples=19 00:29:48.172 lat (msec) : 10=0.53%, 20=0.45%, 50=98.76%, 100=0.26% 00:29:48.172 cpu : usr=97.86%, sys=1.80%, ctx=15, majf=0, minf=9 00:29:48.172 IO depths : 1=5.5%, 2=11.1%, 4=22.9%, 8=53.2%, 16=7.3%, 32=0.0%, >=64=0.0% 00:29:48.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.172 complete : 0=0.0%, 4=93.6%, 8=0.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.172 issued rwts: total=6062,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.172 filename2: (groupid=0, jobs=1): err= 0: pid=1710315: Wed Jul 24 19:30:32 2024 00:29:48.172 read: IOPS=610, BW=2441KiB/s (2499kB/s)(23.9MiB/10013msec) 00:29:48.172 slat (nsec): min=6422, max=85174, avg=20455.04, stdev=10869.97 00:29:48.172 clat (usec): min=8963, max=47326, avg=26046.54, stdev=2924.21 00:29:48.172 lat (usec): min=8971, max=47344, avg=26067.00, stdev=2925.05 00:29:48.172 clat percentiles (usec): 00:29:48.172 | 1.00th=[13304], 5.00th=[25035], 10.00th=[25297], 20.00th=[25560], 00:29:48.172 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26084], 00:29:48.172 | 70.00th=[26346], 80.00th=[26346], 90.00th=[26608], 95.00th=[27395], 00:29:48.172 | 99.00th=[39584], 99.50th=[42730], 99.90th=[46924], 99.95th=[47449], 00:29:48.172 | 99.99th=[47449] 00:29:48.172 bw ( KiB/s): min= 2308, max= 2538, per=4.20%, avg=2439.10, stdev=45.92, samples=20 00:29:48.172 iops : min= 577, max= 634, avg=609.75, stdev=11.42, samples=20 00:29:48.172 lat (msec) : 10=0.65%, 20=2.18%, 50=97.17% 00:29:48.172 cpu : usr=97.21%, sys=2.42%, ctx=27, majf=0, minf=9 00:29:48.172 IO depths : 1=4.6%, 2=9.4%, 4=20.3%, 8=56.9%, 16=8.8%, 32=0.0%, >=64=0.0% 00:29:48.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.172 complete : 0=0.0%, 4=93.1%, 8=2.0%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.172 issued rwts: total=6110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.172 filename2: (groupid=0, jobs=1): err= 0: pid=1710316: Wed Jul 24 19:30:32 2024 00:29:48.172 read: IOPS=602, BW=2410KiB/s (2468kB/s)(23.6MiB/10008msec) 00:29:48.172 slat (nsec): min=5741, max=93787, avg=18918.51, stdev=13646.12 00:29:48.172 clat (usec): min=7531, max=55651, avg=26464.69, stdev=3291.89 
00:29:48.172 lat (usec): min=7538, max=55668, avg=26483.61, stdev=3291.31 00:29:48.172 clat percentiles (usec): 00:29:48.172 | 1.00th=[16057], 5.00th=[23987], 10.00th=[25560], 20.00th=[25822], 00:29:48.172 | 30.00th=[26084], 40.00th=[26084], 50.00th=[26084], 60.00th=[26346], 00:29:48.172 | 70.00th=[26346], 80.00th=[26608], 90.00th=[27395], 95.00th=[32900], 00:29:48.172 | 99.00th=[38011], 99.50th=[40633], 99.90th=[50594], 99.95th=[50594], 00:29:48.172 | 99.99th=[55837] 00:29:48.173 bw ( KiB/s): min= 2176, max= 2496, per=4.13%, avg=2401.68, stdev=90.02, samples=19 00:29:48.173 iops : min= 544, max= 624, avg=600.42, stdev=22.51, samples=19 00:29:48.173 lat (msec) : 10=0.20%, 20=2.60%, 50=97.06%, 100=0.13% 00:29:48.173 cpu : usr=97.27%, sys=2.38%, ctx=17, majf=0, minf=9 00:29:48.173 IO depths : 1=0.3%, 2=1.3%, 4=6.5%, 8=75.6%, 16=16.3%, 32=0.0%, >=64=0.0% 00:29:48.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.173 complete : 0=0.0%, 4=90.7%, 8=7.1%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.173 issued rwts: total=6030,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.173 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.173 00:29:48.173 Run status group 0 (all jobs): 00:29:48.173 READ: bw=56.8MiB/s (59.5MB/s), 2147KiB/s-2580KiB/s (2198kB/s-2642kB/s), io=569MiB (596MB), run=10002-10016msec 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:48.173 bdev_null0 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:48.173 [2024-07-24 19:30:33.162982] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:48.173 bdev_null1 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:48.173 19:30:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:48.173 { 00:29:48.173 "params": { 00:29:48.173 "name": "Nvme$subsystem", 00:29:48.173 "trtype": "$TEST_TRANSPORT", 00:29:48.173 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:48.173 "adrfam": "ipv4", 00:29:48.173 "trsvcid": "$NVMF_PORT", 00:29:48.173 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:48.173 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:48.173 "hdgst": ${hdgst:-false}, 00:29:48.173 "ddgst": ${ddgst:-false} 00:29:48.173 }, 00:29:48.173 "method": "bdev_nvme_attach_controller" 00:29:48.173 } 00:29:48.173 EOF 00:29:48.173 )") 00:29:48.174 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:48.174 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:48.174 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:48.174 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:48.174 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:48.174 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:48.174 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:48.174 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:48.174 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:29:48.174 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:48.174 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:48.174 19:30:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:48.174 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:48.174 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:48.174 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:48.174 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:29:48.174 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:48.174 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:48.174 19:30:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:48.174 19:30:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:48.174 { 00:29:48.174 "params": { 00:29:48.174 "name": "Nvme$subsystem", 00:29:48.174 "trtype": "$TEST_TRANSPORT", 00:29:48.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:48.174 "adrfam": "ipv4", 00:29:48.174 "trsvcid": "$NVMF_PORT", 00:29:48.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:48.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:48.174 "hdgst": ${hdgst:-false}, 00:29:48.174 "ddgst": ${ddgst:-false} 00:29:48.174 }, 00:29:48.174 "method": 
"bdev_nvme_attach_controller" 00:29:48.174 } 00:29:48.174 EOF 00:29:48.174 )") 00:29:48.174 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:48.174 19:30:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:48.174 19:30:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:48.174 19:30:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:29:48.174 19:30:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:48.174 19:30:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:48.174 "params": { 00:29:48.174 "name": "Nvme0", 00:29:48.174 "trtype": "tcp", 00:29:48.174 "traddr": "10.0.0.2", 00:29:48.174 "adrfam": "ipv4", 00:29:48.174 "trsvcid": "4420", 00:29:48.174 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:48.174 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:48.174 "hdgst": false, 00:29:48.174 "ddgst": false 00:29:48.174 }, 00:29:48.174 "method": "bdev_nvme_attach_controller" 00:29:48.174 },{ 00:29:48.174 "params": { 00:29:48.174 "name": "Nvme1", 00:29:48.174 "trtype": "tcp", 00:29:48.174 "traddr": "10.0.0.2", 00:29:48.174 "adrfam": "ipv4", 00:29:48.174 "trsvcid": "4420", 00:29:48.174 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:48.174 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:48.174 "hdgst": false, 00:29:48.174 "ddgst": false 00:29:48.174 }, 00:29:48.174 "method": "bdev_nvme_attach_controller" 00:29:48.174 }' 00:29:48.174 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:48.174 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:48.174 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:48.174 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:48.174 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:48.174 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:48.174 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:48.174 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:48.174 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:48.174 19:30:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:48.174 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:48.174 ... 00:29:48.174 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:48.174 ... 
00:29:48.174 fio-3.35 00:29:48.174 Starting 4 threads 00:29:48.174 EAL: No free 2048 kB hugepages reported on node 1 00:29:53.446 00:29:53.446 filename0: (groupid=0, jobs=1): err= 0: pid=1712290: Wed Jul 24 19:30:39 2024 00:29:53.446 read: IOPS=2754, BW=21.5MiB/s (22.6MB/s)(109MiB/5044msec) 00:29:53.446 slat (nsec): min=2758, max=24720, avg=8582.43, stdev=2786.96 00:29:53.446 clat (usec): min=1117, max=50681, avg=2867.74, stdev=1407.62 00:29:53.446 lat (usec): min=1128, max=50691, avg=2876.32, stdev=1407.50 00:29:53.446 clat percentiles (usec): 00:29:53.447 | 1.00th=[ 2008], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2507], 00:29:53.447 | 30.00th=[ 2606], 40.00th=[ 2671], 50.00th=[ 2737], 60.00th=[ 2868], 00:29:53.447 | 70.00th=[ 2900], 80.00th=[ 3064], 90.00th=[ 3490], 95.00th=[ 3720], 00:29:53.447 | 99.00th=[ 4228], 99.50th=[ 4359], 99.90th=[ 4752], 99.95th=[50594], 00:29:53.447 | 99.99th=[50594] 00:29:53.447 bw ( KiB/s): min=19984, max=23424, per=25.68%, avg=22220.80, stdev=925.66, samples=10 00:29:53.447 iops : min= 2498, max= 2928, avg=2777.60, stdev=115.71, samples=10 00:29:53.447 lat (msec) : 2=0.99%, 4=97.06%, 10=1.86%, 50=0.03%, 100=0.06% 00:29:53.447 cpu : usr=92.74%, sys=6.96%, ctx=14, majf=0, minf=28 00:29:53.447 IO depths : 1=0.3%, 2=1.8%, 4=68.3%, 8=29.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:53.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:53.447 complete : 0=0.0%, 4=94.2%, 8=5.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:53.447 issued rwts: total=13896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:53.447 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:53.447 filename0: (groupid=0, jobs=1): err= 0: pid=1712291: Wed Jul 24 19:30:39 2024 00:29:53.447 read: IOPS=2767, BW=21.6MiB/s (22.7MB/s)(108MiB/5002msec) 00:29:53.447 slat (nsec): min=5805, max=31959, avg=8565.52, stdev=2848.92 00:29:53.447 clat (usec): min=1109, max=5073, avg=2866.77, stdev=489.13 00:29:53.447 lat (usec): min=1115, max=5080, avg=2875.33, stdev=488.91 00:29:53.447 clat percentiles (usec): 00:29:53.447 | 1.00th=[ 1795], 5.00th=[ 2212], 10.00th=[ 2376], 20.00th=[ 2540], 00:29:53.447 | 30.00th=[ 2638], 40.00th=[ 2704], 50.00th=[ 2802], 60.00th=[ 2868], 00:29:53.447 | 70.00th=[ 2933], 80.00th=[ 3195], 90.00th=[ 3621], 95.00th=[ 3851], 00:29:53.447 | 99.00th=[ 4359], 99.50th=[ 4424], 99.90th=[ 4686], 99.95th=[ 4817], 00:29:53.447 | 99.99th=[ 5080] 00:29:53.447 bw ( KiB/s): min=21312, max=23504, per=25.58%, avg=22137.60, stdev=651.37, samples=10 00:29:53.447 iops : min= 2664, max= 2938, avg=2767.20, stdev=81.42, samples=10 00:29:53.447 lat (msec) : 2=2.02%, 4=95.14%, 10=2.84% 00:29:53.447 cpu : usr=92.74%, sys=6.98%, ctx=6, majf=0, minf=36 00:29:53.447 IO depths : 1=0.2%, 2=2.2%, 4=69.2%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:53.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:53.447 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:53.447 issued rwts: total=13844,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:53.447 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:53.447 filename1: (groupid=0, jobs=1): err= 0: pid=1712292: Wed Jul 24 19:30:39 2024 00:29:53.447 read: IOPS=2700, BW=21.1MiB/s (22.1MB/s)(106MiB/5002msec) 00:29:53.447 slat (usec): min=5, max=146, avg= 8.80, stdev= 3.11 00:29:53.447 clat (usec): min=1221, max=4715, avg=2939.16, stdev=469.36 00:29:53.447 lat (usec): min=1227, max=4721, avg=2947.96, stdev=469.30 00:29:53.447 clat percentiles (usec): 00:29:53.447 | 1.00th=[ 1729], 
5.00th=[ 2245], 10.00th=[ 2442], 20.00th=[ 2606], 00:29:53.447 | 30.00th=[ 2704], 40.00th=[ 2835], 50.00th=[ 2900], 60.00th=[ 2966], 00:29:53.447 | 70.00th=[ 3097], 80.00th=[ 3228], 90.00th=[ 3556], 95.00th=[ 3884], 00:29:53.447 | 99.00th=[ 4228], 99.50th=[ 4293], 99.90th=[ 4555], 99.95th=[ 4621], 00:29:53.447 | 99.99th=[ 4686] 00:29:53.447 bw ( KiB/s): min=20480, max=22701, per=24.97%, avg=21609.30, stdev=636.00, samples=10 00:29:53.447 iops : min= 2560, max= 2837, avg=2701.10, stdev=79.38, samples=10 00:29:53.447 lat (msec) : 2=2.01%, 4=93.75%, 10=4.24% 00:29:53.447 cpu : usr=93.02%, sys=6.66%, ctx=7, majf=0, minf=61 00:29:53.447 IO depths : 1=0.1%, 2=1.7%, 4=67.7%, 8=30.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:53.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:53.447 complete : 0=0.0%, 4=94.8%, 8=5.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:53.447 issued rwts: total=13506,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:53.447 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:53.447 filename1: (groupid=0, jobs=1): err= 0: pid=1712293: Wed Jul 24 19:30:39 2024 00:29:53.447 read: IOPS=2662, BW=20.8MiB/s (21.8MB/s)(104MiB/5001msec) 00:29:53.447 slat (nsec): min=5812, max=26203, avg=8770.19, stdev=2926.30 00:29:53.447 clat (usec): min=1093, max=44533, avg=2981.86, stdev=1105.80 00:29:53.447 lat (usec): min=1099, max=44558, avg=2990.63, stdev=1105.87 00:29:53.447 clat percentiles (usec): 00:29:53.447 | 1.00th=[ 2114], 5.00th=[ 2376], 10.00th=[ 2507], 20.00th=[ 2638], 00:29:53.447 | 30.00th=[ 2737], 40.00th=[ 2835], 50.00th=[ 2900], 60.00th=[ 2966], 00:29:53.447 | 70.00th=[ 3130], 80.00th=[ 3163], 90.00th=[ 3523], 95.00th=[ 3884], 00:29:53.447 | 99.00th=[ 4293], 99.50th=[ 4424], 99.90th=[ 5014], 99.95th=[44303], 00:29:53.447 | 99.99th=[44303] 00:29:53.447 bw ( KiB/s): min=19639, max=22192, per=24.60%, avg=21290.30, stdev=756.02, samples=10 00:29:53.447 iops : min= 2454, max= 2774, avg=2661.20, stdev=94.71, samples=10 00:29:53.447 lat (msec) : 2=0.50%, 4=95.77%, 10=3.67%, 50=0.06% 00:29:53.447 cpu : usr=92.84%, sys=6.86%, ctx=10, majf=0, minf=41 00:29:53.447 IO depths : 1=0.2%, 2=1.6%, 4=67.5%, 8=30.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:53.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:53.447 complete : 0=0.0%, 4=95.0%, 8=5.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:53.447 issued rwts: total=13315,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:53.447 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:53.447 00:29:53.447 Run status group 0 (all jobs): 00:29:53.447 READ: bw=84.5MiB/s (88.6MB/s), 20.8MiB/s-21.6MiB/s (21.8MB/s-22.7MB/s), io=426MiB (447MB), run=5001-5044msec 00:29:53.447 19:30:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:29:53.447 19:30:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:53.447 19:30:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:53.447 19:30:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:53.447 19:30:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:53.447 19:30:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:53.447 19:30:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.447 19:30:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:53.447 19:30:39 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.447 19:30:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:53.447 19:30:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.447 19:30:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:53.447 19:30:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.447 19:30:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:53.447 19:30:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:53.447 19:30:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:29:53.447 19:30:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:53.447 19:30:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.447 19:30:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:53.447 19:30:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.447 19:30:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:53.447 19:30:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.447 19:30:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:53.447 19:30:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.447 00:29:53.447 real 0m24.499s 00:29:53.447 user 4m54.726s 00:29:53.447 sys 0m9.357s 00:29:53.447 19:30:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:53.447 19:30:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:53.447 ************************************ 00:29:53.447 END TEST fio_dif_rand_params 00:29:53.447 ************************************ 00:29:53.447 19:30:39 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:29:53.447 19:30:39 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:53.447 19:30:39 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:53.447 19:30:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:53.447 ************************************ 00:29:53.447 START TEST fio_dif_digest 00:29:53.447 ************************************ 00:29:53.447 19:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:29:53.447 19:30:39 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:29:53.447 19:30:39 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:29:53.447 19:30:39 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:29:53.447 19:30:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:29:53.447 19:30:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:29:53.447 19:30:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:29:53.447 19:30:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:29:53.447 19:30:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:29:53.447 19:30:39 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:29:53.447 19:30:39 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:29:53.447 19:30:39 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:29:53.447 19:30:39 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:29:53.447 19:30:39 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:29:53.447 19:30:39 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:29:53.447 19:30:39 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:29:53.447 19:30:39 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:53.447 19:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.447 19:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:53.447 bdev_null0 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:53.448 [2024-07-24 19:30:39.661540] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:53.448 { 00:29:53.448 "params": { 00:29:53.448 "name": "Nvme$subsystem", 00:29:53.448 "trtype": "$TEST_TRANSPORT", 00:29:53.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:53.448 "adrfam": "ipv4", 00:29:53.448 "trsvcid": "$NVMF_PORT", 00:29:53.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:53.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:53.448 "hdgst": ${hdgst:-false}, 00:29:53.448 "ddgst": ${ddgst:-false} 00:29:53.448 }, 00:29:53.448 "method": 
"bdev_nvme_attach_controller" 00:29:53.448 } 00:29:53.448 EOF 00:29:53.448 )") 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:29:53.448 19:30:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:53.448 "params": { 00:29:53.448 "name": "Nvme0", 00:29:53.448 "trtype": "tcp", 00:29:53.448 "traddr": "10.0.0.2", 00:29:53.448 "adrfam": "ipv4", 00:29:53.448 "trsvcid": "4420", 00:29:53.448 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:53.448 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:53.448 "hdgst": true, 00:29:53.448 "ddgst": true 00:29:53.448 }, 00:29:53.448 "method": "bdev_nvme_attach_controller" 00:29:53.448 }' 00:29:53.707 19:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:53.707 19:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:53.707 19:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:53.707 19:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:53.707 19:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:53.707 19:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:53.707 19:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:53.707 19:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:53.707 19:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:53.707 19:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:53.966 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:53.966 ... 
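[editor's note] The resolved config printed just above (hdgst/ddgst true, i.e. both the NVMe/TCP PDU header digest and data digest, CRC32C, are in play) is what the preceding records assemble: gen_nvmf_target_json emits one bdev_nvme_attach_controller fragment per subsystem through a heredoc with ${hdgst:-false}/${ddgst:-false} defaults, comma-joins the fragments under IFS=, runs them through jq as a combined validator/pretty-printer, and streams the result to fio's spdk_bdev ioengine over /dev/fd/62. A condensed standalone sketch of the same flow, using the job parameters visible in the filename0 line (randread, 128KiB blocks, iodepth 3, three jobs); the wrapper layout and all paths are illustrative assumptions, not the harness's exact invocation:

#!/usr/bin/env bash
# Sketch only: reproduces the config-assembly + fio invocation pattern above.
# Address/subnqn are taken from this run; everything else is illustrative.
hdgst=true ddgst=true
config=()
config+=("$(cat <<-EOF
	{
	  "method": "bdev_nvme_attach_controller",
	  "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
	              "adrfam": "ipv4", "trsvcid": "4420",
	              "subnqn": "nqn.2016-06.io.spdk:cnode0",
	              "hdgst": ${hdgst:-false}, "ddgst": ${ddgst:-false} }
	}
	EOF
)")
# jq doubles as validator and pretty-printer; wrap the fragment(s) in the
# standard SPDK JSON-config layout the bdev fio plugin expects
IFS=,
printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' "${config[*]}" |
	jq . > /tmp/nvme0.json
# the plugin requires fio's thread model; Nvme0n1 is the bdev the attach creates
LD_PRELOAD=./spdk/build/fio/spdk_bdev fio --name=digest --thread \
	--ioengine=spdk_bdev --spdk_json_conf=/tmp/nvme0.json \
	--filename=Nvme0n1 --rw=randread --bs=128k --iodepth=3 --numjobs=3

Driving fio through the bdev plugin keeps the whole I/O path in user space, so digest generation and verification are exercised in SPDK's TCP initiator rather than the kernel's. [end note]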
00:29:53.966 fio-3.35 00:29:53.966 Starting 3 threads 00:29:53.966 EAL: No free 2048 kB hugepages reported on node 1 00:30:06.179 00:30:06.179 filename0: (groupid=0, jobs=1): err= 0: pid=1713497: Wed Jul 24 19:30:50 2024 00:30:06.179 read: IOPS=276, BW=34.5MiB/s (36.2MB/s)(347MiB/10044msec) 00:30:06.179 slat (nsec): min=6055, max=30378, avg=10838.89, stdev=2222.33 00:30:06.179 clat (usec): min=6219, max=95916, avg=10837.36, stdev=3077.41 00:30:06.179 lat (usec): min=6230, max=95928, avg=10848.19, stdev=3077.51 00:30:06.179 clat percentiles (usec): 00:30:06.179 | 1.00th=[ 7242], 5.00th=[ 7898], 10.00th=[ 8979], 20.00th=[10028], 00:30:06.179 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10814], 60.00th=[11076], 00:30:06.179 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12256], 00:30:06.179 | 99.00th=[13173], 99.50th=[14222], 99.90th=[53740], 99.95th=[53740], 00:30:06.179 | 99.99th=[95945] 00:30:06.179 bw ( KiB/s): min=29952, max=37888, per=33.82%, avg=35472.30, stdev=1667.54, samples=20 00:30:06.179 iops : min= 234, max= 296, avg=277.10, stdev=13.03, samples=20 00:30:06.179 lat (msec) : 10=18.82%, 20=80.82%, 50=0.04%, 100=0.32% 00:30:06.179 cpu : usr=91.11%, sys=8.59%, ctx=16, majf=0, minf=175 00:30:06.179 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:06.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.179 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.179 issued rwts: total=2773,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.179 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:06.180 filename0: (groupid=0, jobs=1): err= 0: pid=1713498: Wed Jul 24 19:30:50 2024 00:30:06.180 read: IOPS=275, BW=34.4MiB/s (36.1MB/s)(346MiB/10044msec) 00:30:06.180 slat (nsec): min=6104, max=39009, avg=11050.02, stdev=2240.50 00:30:06.180 clat (usec): min=6462, max=54519, avg=10872.52, stdev=3744.91 00:30:06.180 lat (usec): min=6469, max=54531, avg=10883.57, stdev=3744.99 00:30:06.180 clat percentiles (usec): 00:30:06.180 | 1.00th=[ 7177], 5.00th=[ 7898], 10.00th=[ 9241], 20.00th=[ 9896], 00:30:06.180 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 00:30:06.180 | 70.00th=[11076], 80.00th=[11469], 90.00th=[11863], 95.00th=[12256], 00:30:06.180 | 99.00th=[13435], 99.50th=[52167], 99.90th=[53740], 99.95th=[54264], 00:30:06.180 | 99.99th=[54264] 00:30:06.180 bw ( KiB/s): min=31744, max=37632, per=33.70%, avg=35353.60, stdev=1590.93, samples=20 00:30:06.180 iops : min= 248, max= 294, avg=276.20, stdev=12.43, samples=20 00:30:06.180 lat (msec) : 10=21.13%, 20=78.15%, 50=0.04%, 100=0.69% 00:30:06.180 cpu : usr=91.64%, sys=8.07%, ctx=15, majf=0, minf=155 00:30:06.180 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:06.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.180 issued rwts: total=2764,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.180 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:06.180 filename0: (groupid=0, jobs=1): err= 0: pid=1713499: Wed Jul 24 19:30:50 2024 00:30:06.180 read: IOPS=268, BW=33.5MiB/s (35.2MB/s)(337MiB/10046msec) 00:30:06.180 slat (nsec): min=6099, max=26738, avg=11132.73, stdev=2132.23 00:30:06.180 clat (usec): min=6720, max=54386, avg=11148.38, stdev=5186.82 00:30:06.180 lat (usec): min=6727, max=54397, avg=11159.52, stdev=5186.88 00:30:06.180 clat percentiles (usec): 
00:30:06.180 | 1.00th=[ 7439], 5.00th=[ 8586], 10.00th=[ 9372], 20.00th=[ 9896], 00:30:06.180 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10552], 60.00th=[10814], 00:30:06.180 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11731], 95.00th=[12125], 00:30:06.180 | 99.00th=[51643], 99.50th=[52691], 99.90th=[53740], 99.95th=[54264], 00:30:06.180 | 99.99th=[54264] 00:30:06.180 bw ( KiB/s): min=30208, max=39680, per=32.87%, avg=34483.20, stdev=2451.39, samples=20 00:30:06.180 iops : min= 236, max= 310, avg=269.40, stdev=19.15, samples=20 00:30:06.180 lat (msec) : 10=23.89%, 20=74.59%, 50=0.04%, 100=1.48% 00:30:06.180 cpu : usr=91.32%, sys=8.37%, ctx=20, majf=0, minf=52 00:30:06.180 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:06.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.180 issued rwts: total=2696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.180 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:06.180 00:30:06.180 Run status group 0 (all jobs): 00:30:06.180 READ: bw=102MiB/s (107MB/s), 33.5MiB/s-34.5MiB/s (35.2MB/s-36.2MB/s), io=1029MiB (1079MB), run=10044-10046msec 00:30:06.180 19:30:50 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:30:06.180 19:30:50 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:30:06.180 19:30:50 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:30:06.180 19:30:50 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:06.180 19:30:50 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:30:06.180 19:30:50 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:06.180 19:30:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.180 19:30:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:06.180 19:30:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.180 19:30:50 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:06.180 19:30:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.180 19:30:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:06.180 19:30:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.180 00:30:06.180 real 0m11.097s 00:30:06.180 user 0m36.426s 00:30:06.180 sys 0m2.849s 00:30:06.180 19:30:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:06.180 19:30:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:06.180 ************************************ 00:30:06.180 END TEST fio_dif_digest 00:30:06.180 ************************************ 00:30:06.180 19:30:50 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:30:06.180 19:30:50 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:30:06.180 19:30:50 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:06.180 19:30:50 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:30:06.180 19:30:50 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:06.180 19:30:50 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:30:06.180 19:30:50 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:06.180 19:30:50 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:06.180 rmmod nvme_tcp 00:30:06.180 rmmod 
nvme_fabrics 00:30:06.180 rmmod nvme_keyring 00:30:06.180 19:30:50 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:06.180 19:30:50 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:30:06.180 19:30:50 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:30:06.180 19:30:50 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1704145 ']' 00:30:06.180 19:30:50 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1704145 00:30:06.180 19:30:50 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 1704145 ']' 00:30:06.180 19:30:50 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 1704145 00:30:06.180 19:30:50 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:30:06.180 19:30:50 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:06.180 19:30:50 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1704145 00:30:06.180 19:30:50 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:06.180 19:30:50 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:06.180 19:30:50 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1704145' 00:30:06.180 killing process with pid 1704145 00:30:06.180 19:30:50 nvmf_dif -- common/autotest_common.sh@969 -- # kill 1704145 00:30:06.180 19:30:50 nvmf_dif -- common/autotest_common.sh@974 -- # wait 1704145 00:30:06.180 19:30:51 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:06.180 19:30:51 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:08.082 Waiting for block devices as requested 00:30:08.082 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:08.082 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:08.082 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:08.082 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:08.082 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:08.082 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:08.341 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:08.341 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:08.341 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:08.341 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:08.600 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:08.600 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:08.600 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:08.859 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:08.859 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:08.859 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:09.118 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:30:09.118 19:30:55 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:09.118 19:30:55 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:09.118 19:30:55 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:09.118 19:30:55 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:09.118 19:30:55 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:09.118 19:30:55 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:09.118 19:30:55 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:11.653 19:30:57 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:11.653 00:30:11.653 real 1m16.533s 00:30:11.653 user 7m15.310s 00:30:11.653 sys 0m30.193s 00:30:11.653 19:30:57 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:11.653 19:30:57 nvmf_dif -- 
common/autotest_common.sh@10 -- # set +x 00:30:11.653 ************************************ 00:30:11.653 END TEST nvmf_dif 00:30:11.653 ************************************ 00:30:11.653 19:30:57 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:11.653 19:30:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:11.653 19:30:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:11.653 19:30:57 -- common/autotest_common.sh@10 -- # set +x 00:30:11.653 ************************************ 00:30:11.653 START TEST nvmf_abort_qd_sizes 00:30:11.653 ************************************ 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:11.653 * Looking for test storage... 00:30:11.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:11.653 19:30:57 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:30:11.653 19:30:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:18.267 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:18.267 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:30:18.267 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:18.267 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:18.267 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:18.267 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:18.267 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:18.267 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:30:18.267 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:18.267 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:30:18.267 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:30:18.267 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:30:18.267 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:30:18.267 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:30:18.267 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:30:18.267 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:18.267 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:18.267 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:18.267 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:18.267 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:18.267 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:18.267 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:18.267 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:18.267 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:18.267 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:18.267 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:18.267 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:18.267 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:18.267 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:18.267 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:18.267 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:18.267 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:18.267 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:18.267 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:18.267 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:18.267 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:18.268 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:18.268 Found net devices under 0000:af:00.0: cvl_0_0 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:18.268 Found net devices under 0000:af:00.1: cvl_0_1 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
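[editor's note] The two "Found net devices under …" records above come from a plain sysfs walk: nvmftestinit builds per-vendor PCI-ID allowlists (e810/x722/mlx), and for each matching function globs the interfaces the kernel exposes under that PCI address; here both E810 ports (device ID 0x159b) resolve to the renamed cvl_0_0/cvl_0_1 interfaces. A minimal sketch of the same lookup, with the PCI addresses as observed in this run (it only produces output on a host with these devices):

# same lookup as nvmf/common.sh@383-400 above, condensed
for pci in 0000:af:00.0 0000:af:00.1; do
	pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
	pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
	echo "Found net devices under $pci: ${pci_net_devs[*]}"
done
[end note]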
00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:18.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:18.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:30:18.268 00:30:18.268 --- 10.0.0.2 ping statistics --- 00:30:18.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.268 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:18.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:18.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:30:18.268 00:30:18.268 --- 10.0.0.1 ping statistics --- 00:30:18.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.268 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:18.268 19:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:20.803 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:20.803 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:20.803 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:20.803 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:20.803 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:20.803 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:20.803 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:20.803 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:20.803 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:20.803 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:20.803 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:20.803 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:20.803 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:20.803 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:20.803 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:20.803 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:22.179 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:30:22.179 19:31:08 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:22.179 19:31:08 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:22.179 19:31:08 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:22.179 19:31:08 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:22.179 19:31:08 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:22.179 19:31:08 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:22.179 19:31:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:30:22.179 19:31:08 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:22.179 19:31:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:22.179 19:31:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:22.179 19:31:08 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1721499 00:30:22.179 19:31:08 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:30:22.179 19:31:08 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1721499 00:30:22.179 19:31:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 1721499 ']' 00:30:22.179 19:31:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:22.179 19:31:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:22.179 19:31:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
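[editor's note] Before nvmf_tgt is launched below, the nvmf_tcp_init records above split the two physical E810 ports across network namespaces so NVMe/TCP traffic actually crosses the wire: cvl_0_0 moves into cvl_0_0_ns_spdk and takes the target address 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1, and both directions are ping-verified. A condensed sketch of that sequence, with the interface and namespace names as observed in this run:

# same sequence as nvmf/common.sh@248-268 above, condensed (run as root)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns

nvmf_tgt is then started inside the namespace (the ip netns exec cvl_0_0_ns_spdk … nvmf_tgt invocation above), which is why the waitforlisten on /var/tmp/spdk.sock that follows tracks the namespaced pid. [end note]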
00:30:22.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:22.179 19:31:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:22.179 19:31:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:22.438 [2024-07-24 19:31:08.462248] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:30:22.438 [2024-07-24 19:31:08.462303] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:22.438 EAL: No free 2048 kB hugepages reported on node 1 00:30:22.438 [2024-07-24 19:31:08.539018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:22.438 [2024-07-24 19:31:08.609541] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:22.438 [2024-07-24 19:31:08.609583] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:22.438 [2024-07-24 19:31:08.609593] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:22.438 [2024-07-24 19:31:08.609601] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:22.438 [2024-07-24 19:31:08.609608] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:22.438 [2024-07-24 19:31:08.609661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:22.438 [2024-07-24 19:31:08.609759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:22.438 [2024-07-24 19:31:08.609810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:22.438 [2024-07-24 19:31:08.609812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:23.375 19:31:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:23.375 19:31:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:30:23.375 19:31:09 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:23.375 19:31:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:23.375 19:31:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:23.375 19:31:09 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:23.375 19:31:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:30:23.375 19:31:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:30:23.375 19:31:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:30:23.375 19:31:09 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:30:23.375 19:31:09 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:30:23.375 19:31:09 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:d8:00.0 ]] 00:30:23.375 19:31:09 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:30:23.375 19:31:09 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:30:23.375 19:31:09 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:d8:00.0 ]] 00:30:23.375 19:31:09 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:30:23.375 19:31:09 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:30:23.375 19:31:09 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:30:23.375 19:31:09 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:30:23.375 19:31:09 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:d8:00.0 00:30:23.375 19:31:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:30:23.375 19:31:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:d8:00.0 00:30:23.375 19:31:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:30:23.375 19:31:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:23.375 19:31:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:23.375 19:31:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:23.375 ************************************ 00:30:23.375 START TEST spdk_target_abort 00:30:23.375 ************************************ 00:30:23.375 19:31:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:30:23.375 19:31:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:30:23.375 19:31:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:d8:00.0 -b spdk_target 00:30:23.375 19:31:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.375 19:31:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:26.663 spdk_targetn1 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:26.663 [2024-07-24 19:31:12.225651] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:26.663 [2024-07-24 19:31:12.257874] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:26.663 19:31:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:26.663 EAL: No free 2048 kB hugepages 
reported on node 1 00:30:29.954 Initializing NVMe Controllers 00:30:29.954 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:29.954 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:29.954 Initialization complete. Launching workers. 00:30:29.954 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11564, failed: 0 00:30:29.954 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1572, failed to submit 9992 00:30:29.954 success 825, unsuccess 747, failed 0 00:30:29.954 19:31:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:29.954 19:31:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:29.954 EAL: No free 2048 kB hugepages reported on node 1 00:30:33.244 Initializing NVMe Controllers 00:30:33.244 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:33.244 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:33.244 Initialization complete. Launching workers. 00:30:33.244 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8721, failed: 0 00:30:33.244 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1247, failed to submit 7474 00:30:33.244 success 327, unsuccess 920, failed 0 00:30:33.244 19:31:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:33.244 19:31:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:33.244 EAL: No free 2048 kB hugepages reported on node 1 00:30:35.777 Initializing NVMe Controllers 00:30:35.777 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:35.777 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:35.777 Initialization complete. Launching workers. 
00:30:35.777 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38785, failed: 0 00:30:35.777 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2757, failed to submit 36028 00:30:35.777 success 574, unsuccess 2183, failed 0 00:30:35.777 19:31:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:30:35.777 19:31:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.777 19:31:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:35.777 19:31:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.777 19:31:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:30:35.777 19:31:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.777 19:31:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:37.710 19:31:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.710 19:31:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1721499 00:30:37.710 19:31:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 1721499 ']' 00:30:37.710 19:31:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 1721499 00:30:37.710 19:31:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:30:37.710 19:31:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:37.710 19:31:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1721499 00:30:37.710 19:31:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:37.710 19:31:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:37.710 19:31:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1721499' 00:30:37.710 killing process with pid 1721499 00:30:37.710 19:31:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 1721499 00:30:37.710 19:31:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 1721499 00:30:37.968 00:30:37.968 real 0m14.751s 00:30:37.968 user 0m58.280s 00:30:37.968 sys 0m2.899s 00:30:37.968 19:31:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:37.968 19:31:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:37.968 ************************************ 00:30:37.968 END TEST spdk_target_abort 00:30:37.968 ************************************ 00:30:37.968 19:31:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:30:37.968 19:31:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:37.968 19:31:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:37.968 19:31:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:37.968 ************************************ 00:30:37.968 START TEST kernel_target_abort 00:30:37.968 
************************************ 00:30:37.968 19:31:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:30:38.226 19:31:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:30:38.226 19:31:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:30:38.226 19:31:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:38.226 19:31:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:38.226 19:31:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:38.226 19:31:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:38.226 19:31:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:38.226 19:31:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:38.226 19:31:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:38.226 19:31:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:38.226 19:31:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:38.226 19:31:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:30:38.226 19:31:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:30:38.226 19:31:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:30:38.226 19:31:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:38.226 19:31:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:38.226 19:31:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:38.226 19:31:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:30:38.226 19:31:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:30:38.226 19:31:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:30:38.226 19:31:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:38.226 19:31:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:40.761 Waiting for block devices as requested 00:30:40.761 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:40.761 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:40.761 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:41.020 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:41.020 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:41.020 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:41.279 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:41.279 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:41.279 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:41.279 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:41.538 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:41.538 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:41.538 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:41.797 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:41.797 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:41.797 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:42.056 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:30:42.056 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:42.056 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:42.056 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:30:42.056 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:30:42.056 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:42.056 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:30:42.056 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:30:42.056 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:30:42.056 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:30:42.056 No valid GPT data, bailing 00:30:42.056 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:42.056 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:30:42.056 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:30:42.056 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:30:42.056 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:30:42.056 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:42.056 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:42.056 19:31:28 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:42.056 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:30:42.056 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:30:42.056 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:30:42.056 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:30:42.056 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:30:42.056 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:30:42.056 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:30:42.056 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:30:42.056 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:42.315 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:30:42.315 00:30:42.315 Discovery Log Number of Records 2, Generation counter 2 00:30:42.315 =====Discovery Log Entry 0====== 00:30:42.315 trtype: tcp 00:30:42.315 adrfam: ipv4 00:30:42.315 subtype: current discovery subsystem 00:30:42.315 treq: not specified, sq flow control disable supported 00:30:42.315 portid: 1 00:30:42.315 trsvcid: 4420 00:30:42.315 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:42.315 traddr: 10.0.0.1 00:30:42.315 eflags: none 00:30:42.315 sectype: none 00:30:42.315 =====Discovery Log Entry 1====== 00:30:42.315 trtype: tcp 00:30:42.315 adrfam: ipv4 00:30:42.315 subtype: nvme subsystem 00:30:42.315 treq: not specified, sq flow control disable supported 00:30:42.315 portid: 1 00:30:42.315 trsvcid: 4420 00:30:42.315 subnqn: nqn.2016-06.io.spdk:testnqn 00:30:42.315 traddr: 10.0.0.1 00:30:42.315 eflags: none 00:30:42.316 sectype: none 00:30:42.316 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:30:42.316 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:42.316 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:42.316 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:30:42.316 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:42.316 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:42.316 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:42.316 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:42.316 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:42.316 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:42.316 19:31:28 
00:30:42.316 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn
00:30:42.316 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:30:42.316 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:30:42.316 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1
00:30:42.316 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:30:42.316 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:30:42.316 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:30:42.316 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r
00:30:42.316 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:30:42.316 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:30:42.316 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp
00:30:42.316 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:30:42.316 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4'
00:30:42.316 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:30:42.316 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1'
00:30:42.316 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:30:42.316 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420'
00:30:42.316 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:30:42.316 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:30:42.316 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:30:42.316 19:31:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:30:42.316 EAL: No free 2048 kB hugepages reported on node 1
00:30:45.605 Initializing NVMe Controllers
00:30:45.605 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:30:45.605 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:30:45.605 Initialization complete. Launching workers.
00:30:45.605 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 74618, failed: 0
00:30:45.605 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 74618, failed to submit 0
00:30:45.605 success 0, unsuccess 74618, failed 0
00:30:45.605 19:31:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:30:45.605 19:31:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:30:45.664 EAL: No free 2048 kB hugepages reported on node 1
00:30:48.889 Initializing NVMe Controllers
00:30:48.889 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:30:48.889 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:30:48.889 Initialization complete. Launching workers.
00:30:48.889 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 130490, failed: 0 00:30:48.889 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32778, failed to submit 97712 00:30:48.889 success 0, unsuccess 32778, failed 0 00:30:48.889 19:31:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:48.889 19:31:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:48.889 EAL: No free 2048 kB hugepages reported on node 1 00:30:51.419 Initializing NVMe Controllers 00:30:51.419 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:51.419 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:51.419 Initialization complete. Launching workers. 00:30:51.419 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 125891, failed: 0 00:30:51.419 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31462, failed to submit 94429 00:30:51.419 success 0, unsuccess 31462, failed 0 00:30:51.677 19:31:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:30:51.677 19:31:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:51.677 19:31:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:30:51.677 19:31:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:51.677 19:31:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:51.677 19:31:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:51.677 19:31:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:51.677 19:31:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:30:51.677 19:31:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:30:51.677 19:31:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:54.964 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:54.964 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:54.964 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:54.964 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:54.964 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:54.964 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:54.964 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:54.964 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:54.964 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:54.964 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:54.964 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:54.964 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:54.964 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:54.964 0000:80:04.2 (8086 2021): ioatdma -> 
vfio-pci 00:30:54.964 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:54.964 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:56.904 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:30:56.904 00:30:56.904 real 0m18.546s 00:30:56.904 user 0m7.798s 00:30:56.904 sys 0m5.575s 00:30:56.904 19:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:56.904 19:31:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:56.904 ************************************ 00:30:56.904 END TEST kernel_target_abort 00:30:56.904 ************************************ 00:30:56.904 19:31:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:56.904 19:31:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:30:56.904 19:31:42 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:56.904 19:31:42 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:30:56.904 19:31:42 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:56.904 19:31:42 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:30:56.904 19:31:42 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:56.904 19:31:42 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:56.904 rmmod nvme_tcp 00:30:56.904 rmmod nvme_fabrics 00:30:56.904 rmmod nvme_keyring 00:30:56.904 19:31:42 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:56.904 19:31:42 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:30:56.904 19:31:42 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:30:56.904 19:31:42 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1721499 ']' 00:30:56.904 19:31:42 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1721499 00:30:56.904 19:31:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 1721499 ']' 00:30:56.904 19:31:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 1721499 00:30:56.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1721499) - No such process 00:30:56.904 19:31:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 1721499 is not found' 00:30:56.904 Process with pid 1721499 is not found 00:30:56.904 19:31:42 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:56.904 19:31:42 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:59.437 Waiting for block devices as requested 00:30:59.696 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:59.696 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:59.696 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:59.955 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:59.955 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:59.955 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:59.955 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:00.214 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:00.214 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:00.214 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:00.473 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:00.473 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:00.473 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:00.473 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:00.732 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:00.732 
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:00.732 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:31:00.991 19:31:47 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:00.991 19:31:47 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:00.991 19:31:47 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:00.991 19:31:47 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:00.991 19:31:47 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:00.991 19:31:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:00.991 19:31:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:03.525 19:31:49 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:03.525 00:31:03.525 real 0m51.762s 00:31:03.525 user 1m10.144s 00:31:03.525 sys 0m17.868s 00:31:03.525 19:31:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:03.525 19:31:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:03.525 ************************************ 00:31:03.525 END TEST nvmf_abort_qd_sizes 00:31:03.525 ************************************ 00:31:03.525 19:31:49 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:31:03.525 19:31:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:03.525 19:31:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:03.525 19:31:49 -- common/autotest_common.sh@10 -- # set +x 00:31:03.525 ************************************ 00:31:03.525 START TEST keyring_file 00:31:03.525 ************************************ 00:31:03.525 19:31:49 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:31:03.525 * Looking for test storage... 
00:31:03.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:31:03.525 19:31:49 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:31:03.525 19:31:49 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:03.525 19:31:49 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:31:03.525 19:31:49 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:03.525 19:31:49 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:03.525 19:31:49 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:03.525 19:31:49 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:03.525 19:31:49 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:03.525 19:31:49 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:03.525 19:31:49 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:03.525 19:31:49 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:03.525 19:31:49 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:03.525 19:31:49 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:03.525 19:31:49 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:31:03.525 19:31:49 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:31:03.525 19:31:49 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:03.525 19:31:49 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:03.525 19:31:49 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:03.525 19:31:49 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:03.525 19:31:49 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:03.525 19:31:49 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:03.525 19:31:49 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:03.525 19:31:49 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:03.525 19:31:49 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.525 19:31:49 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.525 19:31:49 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.525 19:31:49 keyring_file -- paths/export.sh@5 -- # export PATH 00:31:03.525 19:31:49 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.525 19:31:49 keyring_file -- nvmf/common.sh@47 -- # : 0 00:31:03.525 19:31:49 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:03.525 19:31:49 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:03.525 19:31:49 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:03.525 19:31:49 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:03.525 19:31:49 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:03.525 19:31:49 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:03.525 19:31:49 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:03.525 19:31:49 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:03.525 19:31:49 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:03.525 19:31:49 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:03.525 19:31:49 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:03.525 19:31:49 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:31:03.525 19:31:49 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:31:03.525 19:31:49 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:31:03.525 19:31:49 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:03.525 19:31:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:03.525 19:31:49 keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:03.525 19:31:49 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:03.525 19:31:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:03.525 19:31:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:03.525 19:31:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.yFd1Rr0LPL 00:31:03.526 19:31:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:03.526 19:31:49 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:03.526 19:31:49 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:03.526 19:31:49 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:03.526 19:31:49 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:03.526 19:31:49 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:03.526 19:31:49 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:03.526 19:31:49 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.yFd1Rr0LPL
00:31:03.526 19:31:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.yFd1Rr0LPL
00:31:03.526 19:31:49 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.yFd1Rr0LPL
00:31:03.526 19:31:49 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0
00:31:03.526 19:31:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path
00:31:03.526 19:31:49 keyring_file -- keyring/common.sh@17 -- # name=key1
00:31:03.526 19:31:49 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00
00:31:03.526 19:31:49 keyring_file -- keyring/common.sh@17 -- # digest=0
00:31:03.526 19:31:49 keyring_file -- keyring/common.sh@18 -- # mktemp
00:31:03.526 19:31:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.FQ6kw70PjG
00:31:03.526 19:31:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0
00:31:03.526 19:31:49 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0
00:31:03.526 19:31:49 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest
00:31:03.526 19:31:49 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1
00:31:03.526 19:31:49 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00
00:31:03.526 19:31:49 keyring_file -- nvmf/common.sh@704 -- # digest=0
00:31:03.526 19:31:49 keyring_file -- nvmf/common.sh@705 -- # python -
00:31:03.526 19:31:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.FQ6kw70PjG
00:31:03.526 19:31:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.FQ6kw70PjG
00:31:03.526 19:31:49 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.FQ6kw70PjG
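Both keys above pass through format_interchange_psk and then format_key, whose inline python - heredoc body xtrace does not echo. Assuming that step implements the NVMe TLS PSK interchange encoding of TP 8018 (base64 over the key bytes plus a little-endian CRC32, wrapped as prefix:digest:b64:), the hidden computation is approximately:

# Rough reconstruction of the hidden format_key step from its traced inputs
# (prefix=NVMeTLSkey-1, key=112233445566778899aabbccddeeff00, digest=0);
# the exact heredoc is not shown here, so treat the details as assumptions.
python - <<'EOF'
import base64
import zlib

key = b"112233445566778899aabbccddeeff00"      # the configured key, as ASCII bytes
digest = 0                                     # 0 = no retained hash
crc = zlib.crc32(key).to_bytes(4, "little")    # 4-byte integrity trailer
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
EOF

The resulting string is what lands in /tmp/tmp.FQ6kw70PjG before the file is chmod 0600'd and registered with the keyring below.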
00:31:03.526 19:31:49 keyring_file -- keyring/file.sh@30 -- # tgtpid=1730843
00:31:03.526 19:31:49 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:31:03.526 19:31:49 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1730843
00:31:03.526 19:31:49 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1730843 ']'
00:31:03.526 19:31:49 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:03.526 19:31:49 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100
00:31:03.526 19:31:49 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:03.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:03.526 19:31:49 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable
00:31:03.526 19:31:49 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:31:03.526 [2024-07-24 19:31:49.577497] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization...
00:31:03.526 [2024-07-24 19:31:49.577552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1730843 ]
00:31:03.526 EAL: No free 2048 kB hugepages reported on node 1
00:31:03.526 [2024-07-24 19:31:49.646675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:03.526 [2024-07-24 19:31:49.724972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:31:04.468 19:31:50 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:31:04.468 19:31:50 keyring_file -- common/autotest_common.sh@864 -- # return 0
00:31:04.468 19:31:50 keyring_file -- keyring/file.sh@33 -- # rpc_cmd
00:31:04.468 19:31:50 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:04.468 19:31:50 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:31:04.468 [2024-07-24 19:31:50.377834] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:04.468 null0
00:31:04.468 [2024-07-24 19:31:50.409875] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:31:04.468 [2024-07-24 19:31:50.410179] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:31:04.468 [2024-07-24 19:31:50.417885] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09
00:31:04.468 19:31:50 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:04.468 19:31:50 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:31:04.468 19:31:50 keyring_file -- common/autotest_common.sh@650 -- # local es=0
00:31:04.468 19:31:50 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:31:04.468 19:31:50 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:31:04.468 19:31:50 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:31:04.468 19:31:50 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:31:04.468 19:31:50 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:31:04.468 19:31:50 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:31:04.468 19:31:50 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:04.468 19:31:50 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:31:04.468 [2024-07-24 19:31:50.429915] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists
00:31:04.468 request:
00:31:04.468 {
00:31:04.468 "nqn": "nqn.2016-06.io.spdk:cnode0",
00:31:04.468 "secure_channel": false,
00:31:04.468 "listen_address": {
00:31:04.468 "trtype": "tcp",
00:31:04.468 "traddr": "127.0.0.1",
00:31:04.468 "trsvcid": "4420"
00:31:04.468 },
00:31:04.468 "method": "nvmf_subsystem_add_listener",
00:31:04.468 "req_id": 1
00:31:04.468 }
00:31:04.468 Got JSON-RPC error response
00:31:04.468 response:
00:31:04.468 {
00:31:04.468 "code": -32602,
00:31:04.468 "message": "Invalid parameters"
00:31:04.468 }
00:31:04.468 19:31:50 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:31:04.468 19:31:50 keyring_file -- common/autotest_common.sh@653 -- # es=1
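NOT is the suite's expected-failure wrapper: the listener at 127.0.0.1:4420 already exists, so the second nvmf_subsystem_add_listener must fail with -32602 and the wrapper inverts the exit status (the es bookkeeping continues just below). Condensed, with the flag spelling taken directly from the trace, the assertion amounts to:

# Condensed restatement of the negative test above: adding the same listener
# twice has to be rejected by the target with JSON-RPC -32602.
# (rpc_cmd forwards to scripts/rpc.py against the default /var/tmp/spdk.sock)
if scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 \
      nqn.2016-06.io.spdk:cnode0; then
    echo "duplicate listener was unexpectedly accepted" >&2
    exit 1
fi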
00:31:04.468 19:31:50 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:04.468 19:31:50 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:04.468 19:31:50 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:04.468 19:31:50 keyring_file -- keyring/file.sh@46 -- # bperfpid=1730901 00:31:04.468 19:31:50 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1730901 /var/tmp/bperf.sock 00:31:04.469 19:31:50 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:31:04.469 19:31:50 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1730901 ']' 00:31:04.469 19:31:50 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:04.469 19:31:50 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:04.469 19:31:50 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:04.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:04.469 19:31:50 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:04.469 19:31:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:04.469 [2024-07-24 19:31:50.487751] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:31:04.469 [2024-07-24 19:31:50.487799] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1730901 ] 00:31:04.469 EAL: No free 2048 kB hugepages reported on node 1 00:31:04.469 [2024-07-24 19:31:50.556990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:04.469 [2024-07-24 19:31:50.631269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:05.404 19:31:51 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:05.404 19:31:51 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:31:05.404 19:31:51 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.yFd1Rr0LPL 00:31:05.404 19:31:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.yFd1Rr0LPL 00:31:05.404 19:31:51 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.FQ6kw70PjG 00:31:05.404 19:31:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.FQ6kw70PjG 00:31:05.662 19:31:51 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:31:05.662 19:31:51 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:31:05.662 19:31:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:05.662 19:31:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:05.662 19:31:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:05.662 19:31:51 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.yFd1Rr0LPL == \/\t\m\p\/\t\m\p\.\y\F\d\1\R\r\0\L\P\L ]] 00:31:05.662 19:31:51 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:31:05.662 19:31:51 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:31:05.662 19:31:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:05.662 19:31:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:05.662 19:31:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:05.920 19:31:51 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.FQ6kw70PjG == \/\t\m\p\/\t\m\p\.\F\Q\6\k\w\7\0\P\j\G ]] 00:31:05.920 19:31:51 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:31:05.920 19:31:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:05.920 19:31:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:05.920 19:31:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:05.920 19:31:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:05.920 19:31:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:06.179 19:31:52 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:31:06.179 19:31:52 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:31:06.179 19:31:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:06.179 19:31:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:06.179 19:31:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:06.179 19:31:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:06.179 19:31:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:06.179 19:31:52 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:31:06.179 19:31:52 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:06.179 19:31:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:06.436 [2024-07-24 19:31:52.495864] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:06.436 nvme0n1 00:31:06.436 19:31:52 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:31:06.436 19:31:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:06.436 19:31:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:06.436 19:31:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:06.436 19:31:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:06.436 19:31:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:06.694 19:31:52 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:31:06.695 19:31:52 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:31:06.695 19:31:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:06.695 19:31:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:06.695 19:31:52 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:31:06.695 19:31:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:31:06.695 19:31:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:31:06.953 19:31:52 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 ))
00:31:06.953 19:31:52 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:31:06.953 Running I/O for 1 seconds...
00:31:07.887
00:31:07.887 Latency(us)
00:31:07.887 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:07.887 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:31:07.887 nvme0n1 : 1.01 13909.35 54.33 0.00 0.00 9170.71 5216.67 15728.64
00:31:07.887 ===================================================================================================================
00:31:07.887 Total : 13909.35 54.33 0.00 0.00 9170.71 5216.67 15728.64
00:31:07.887 0
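The refcount assertions that follow here, and throughout this test, all go through keyring/common.sh's get_refcnt, which is nothing more than keyring_get_keys filtered with jq, as the interleaved rpc.py and jq entries show. Condensed into one helper:

# Condensed restatement of the get_refcnt helper exercised below: list all
# keys over the bdevperf RPC socket, select one by name, and read its refcnt.
get_refcnt() {
    local name=$1
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
        | jq -r ".[] | select(.name == \"$name\") | .refcnt"
}
get_refcnt key0   # the (( ... == ... )) checks below compare this to the expected count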
00:31:07.887 19:31:54 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:31:07.887 19:31:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:31:08.146 19:31:54 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0
00:31:08.146 19:31:54 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:31:08.146 19:31:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:31:08.146 19:31:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:31:08.146 19:31:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:31:08.146 19:31:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:31:08.404 19:31:54 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 ))
00:31:08.404 19:31:54 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1
00:31:08.404 19:31:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:31:08.404 19:31:54 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:31:08.404 19:31:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:31:08.404 19:31:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:31:08.404 19:31:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:31:08.404 19:31:54 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 ))
00:31:08.404 19:31:54 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:31:08.404 19:31:54 keyring_file -- common/autotest_common.sh@650 -- # local es=0
00:31:08.404 19:31:54 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:31:08.404 19:31:54 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd
00:31:08.404 19:31:54 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:31:08.404 19:31:54 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd
00:31:08.404 19:31:54 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:31:08.404 19:31:54 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:31:08.404 19:31:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:31:08.663 [2024-07-24 19:31:54.754734] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:31:08.663 [2024-07-24 19:31:54.755337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dab7c0 (107): Transport endpoint is not connected
00:31:08.663 [2024-07-24 19:31:54.756333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dab7c0 (9): Bad file descriptor
00:31:08.663 [2024-07-24 19:31:54.757332] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:31:08.663 [2024-07-24 19:31:54.757345] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:31:08.663 [2024-07-24 19:31:54.757355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:31:08.663 request:
00:31:08.663 {
00:31:08.663 "name": "nvme0",
00:31:08.663 "trtype": "tcp",
00:31:08.663 "traddr": "127.0.0.1",
00:31:08.663 "adrfam": "ipv4",
00:31:08.663 "trsvcid": "4420",
00:31:08.663 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:31:08.663 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:31:08.663 "prchk_reftag": false,
00:31:08.663 "prchk_guard": false,
00:31:08.663 "hdgst": false,
00:31:08.663 "ddgst": false,
00:31:08.663 "psk": "key1",
00:31:08.663 "method": "bdev_nvme_attach_controller",
00:31:08.663 "req_id": 1
00:31:08.663 }
00:31:08.663 Got JSON-RPC error response
00:31:08.663 response:
00:31:08.663 {
00:31:08.663 "code": -5,
00:31:08.663 "message": "Input/output error"
00:31:08.663 }
00:31:08.663 19:31:54 keyring_file -- common/autotest_common.sh@653 -- # es=1
00:31:08.663 19:31:54 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:31:08.663 19:31:54 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:31:08.663 19:31:54 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:31:08.663 19:31:54 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0
00:31:08.663 19:31:54 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:31:08.663 19:31:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:31:08.663 19:31:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:31:08.663 19:31:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:31:08.921 19:31:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:31:08.921 19:31:54 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 ))
00:31:08.921 19:31:54 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1
00:31:08.921 19:31:54
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:08.921 19:31:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:08.921 19:31:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:08.921 19:31:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:08.922 19:31:55 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:31:08.922 19:31:55 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:31:08.922 19:31:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:09.180 19:31:55 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:31:09.180 19:31:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:31:09.439 19:31:55 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:31:09.439 19:31:55 keyring_file -- keyring/file.sh@77 -- # jq length 00:31:09.439 19:31:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:09.439 19:31:55 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:31:09.439 19:31:55 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.yFd1Rr0LPL 00:31:09.439 19:31:55 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.yFd1Rr0LPL 00:31:09.439 19:31:55 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:31:09.439 19:31:55 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.yFd1Rr0LPL 00:31:09.439 19:31:55 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:31:09.439 19:31:55 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:09.439 19:31:55 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:31:09.439 19:31:55 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:09.439 19:31:55 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.yFd1Rr0LPL 00:31:09.439 19:31:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.yFd1Rr0LPL 00:31:09.698 [2024-07-24 19:31:55.835438] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.yFd1Rr0LPL': 0100660 00:31:09.698 [2024-07-24 19:31:55.835465] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:31:09.698 request: 00:31:09.698 { 00:31:09.698 "name": "key0", 00:31:09.698 "path": "/tmp/tmp.yFd1Rr0LPL", 00:31:09.698 "method": "keyring_file_add_key", 00:31:09.698 "req_id": 1 00:31:09.698 } 00:31:09.698 Got JSON-RPC error response 00:31:09.698 response: 00:31:09.698 { 00:31:09.698 "code": -1, 00:31:09.698 "message": "Operation not permitted" 00:31:09.698 } 00:31:09.698 19:31:55 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:31:09.698 19:31:55 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:09.698 19:31:55 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:09.698 19:31:55 keyring_file -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:09.698 19:31:55 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.yFd1Rr0LPL 00:31:09.698 19:31:55 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.yFd1Rr0LPL 00:31:09.698 19:31:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.yFd1Rr0LPL 00:31:09.957 19:31:56 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.yFd1Rr0LPL 00:31:09.957 19:31:56 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:31:09.957 19:31:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:09.957 19:31:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:09.957 19:31:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:09.957 19:31:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:09.957 19:31:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:09.957 19:31:56 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:31:09.957 19:31:56 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:09.957 19:31:56 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:31:09.957 19:31:56 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:09.957 19:31:56 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:31:09.957 19:31:56 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:09.957 19:31:56 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:31:10.217 19:31:56 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:10.217 19:31:56 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:10.217 19:31:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:10.217 [2024-07-24 19:31:56.340793] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.yFd1Rr0LPL': No such file or directory 00:31:10.217 [2024-07-24 19:31:56.340812] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:31:10.217 [2024-07-24 19:31:56.340833] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:31:10.217 [2024-07-24 19:31:56.340841] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:10.217 [2024-07-24 19:31:56.340849] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:31:10.217 request: 00:31:10.217 { 00:31:10.217 "name": "nvme0", 00:31:10.217 "trtype": "tcp", 00:31:10.217 "traddr": "127.0.0.1", 00:31:10.217 "adrfam": "ipv4", 00:31:10.217 
"trsvcid": "4420", 00:31:10.217 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:10.217 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:10.217 "prchk_reftag": false, 00:31:10.217 "prchk_guard": false, 00:31:10.217 "hdgst": false, 00:31:10.217 "ddgst": false, 00:31:10.217 "psk": "key0", 00:31:10.217 "method": "bdev_nvme_attach_controller", 00:31:10.217 "req_id": 1 00:31:10.217 } 00:31:10.217 Got JSON-RPC error response 00:31:10.217 response: 00:31:10.217 { 00:31:10.217 "code": -19, 00:31:10.217 "message": "No such device" 00:31:10.217 } 00:31:10.217 19:31:56 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:31:10.217 19:31:56 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:10.217 19:31:56 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:10.217 19:31:56 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:10.217 19:31:56 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:31:10.217 19:31:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:10.476 19:31:56 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:10.476 19:31:56 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:10.476 19:31:56 keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:10.476 19:31:56 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:10.476 19:31:56 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:10.476 19:31:56 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:10.476 19:31:56 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.cnZrdojRft 00:31:10.476 19:31:56 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:10.476 19:31:56 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:10.476 19:31:56 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:10.476 19:31:56 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:10.476 19:31:56 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:10.476 19:31:56 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:10.476 19:31:56 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:10.476 19:31:56 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.cnZrdojRft 00:31:10.476 19:31:56 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.cnZrdojRft 00:31:10.476 19:31:56 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.cnZrdojRft 00:31:10.476 19:31:56 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.cnZrdojRft 00:31:10.476 19:31:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.cnZrdojRft 00:31:10.734 19:31:56 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:10.734 19:31:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:10.734 nvme0n1 00:31:10.734 
19:31:56 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:31:10.734 19:31:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:10.734 19:31:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:10.734 19:31:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:10.734 19:31:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:10.734 19:31:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:10.993 19:31:57 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:31:10.993 19:31:57 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:31:10.993 19:31:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:11.251 19:31:57 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:31:11.251 19:31:57 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:31:11.252 19:31:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:11.252 19:31:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:11.252 19:31:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:11.252 19:31:57 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:31:11.252 19:31:57 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:31:11.252 19:31:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:11.252 19:31:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:11.252 19:31:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:11.252 19:31:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:11.252 19:31:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:11.589 19:31:57 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:31:11.589 19:31:57 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:11.589 19:31:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:11.847 19:31:57 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:31:11.847 19:31:57 keyring_file -- keyring/file.sh@104 -- # jq length 00:31:11.847 19:31:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:11.847 19:31:58 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:31:11.848 19:31:58 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.cnZrdojRft 00:31:11.848 19:31:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.cnZrdojRft 00:31:12.106 19:31:58 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.FQ6kw70PjG 00:31:12.106 19:31:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.FQ6kw70PjG 00:31:12.365 19:31:58 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:12.365 19:31:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:12.365 nvme0n1 00:31:12.623 19:31:58 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:31:12.623 19:31:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:31:12.623 19:31:58 keyring_file -- keyring/file.sh@112 -- # config='{ 00:31:12.623 "subsystems": [ 00:31:12.623 { 00:31:12.623 "subsystem": "keyring", 00:31:12.623 "config": [ 00:31:12.623 { 00:31:12.623 "method": "keyring_file_add_key", 00:31:12.623 "params": { 00:31:12.623 "name": "key0", 00:31:12.623 "path": "/tmp/tmp.cnZrdojRft" 00:31:12.623 } 00:31:12.623 }, 00:31:12.623 { 00:31:12.623 "method": "keyring_file_add_key", 00:31:12.623 "params": { 00:31:12.623 "name": "key1", 00:31:12.623 "path": "/tmp/tmp.FQ6kw70PjG" 00:31:12.623 } 00:31:12.623 } 00:31:12.623 ] 00:31:12.623 }, 00:31:12.623 { 00:31:12.623 "subsystem": "iobuf", 00:31:12.623 "config": [ 00:31:12.623 { 00:31:12.623 "method": "iobuf_set_options", 00:31:12.623 "params": { 00:31:12.623 "small_pool_count": 8192, 00:31:12.623 "large_pool_count": 1024, 00:31:12.623 "small_bufsize": 8192, 00:31:12.623 "large_bufsize": 135168 00:31:12.623 } 00:31:12.623 } 00:31:12.623 ] 00:31:12.623 }, 00:31:12.623 { 00:31:12.623 "subsystem": "sock", 00:31:12.623 "config": [ 00:31:12.623 { 00:31:12.623 "method": "sock_set_default_impl", 00:31:12.623 "params": { 00:31:12.623 "impl_name": "posix" 00:31:12.623 } 00:31:12.623 }, 00:31:12.623 { 00:31:12.623 "method": "sock_impl_set_options", 00:31:12.623 "params": { 00:31:12.623 "impl_name": "ssl", 00:31:12.623 "recv_buf_size": 4096, 00:31:12.623 "send_buf_size": 4096, 00:31:12.623 "enable_recv_pipe": true, 00:31:12.623 "enable_quickack": false, 00:31:12.623 "enable_placement_id": 0, 00:31:12.623 "enable_zerocopy_send_server": true, 00:31:12.623 "enable_zerocopy_send_client": false, 00:31:12.623 "zerocopy_threshold": 0, 00:31:12.623 "tls_version": 0, 00:31:12.623 "enable_ktls": false 00:31:12.623 } 00:31:12.623 }, 00:31:12.623 { 00:31:12.623 "method": "sock_impl_set_options", 00:31:12.623 "params": { 00:31:12.623 "impl_name": "posix", 00:31:12.623 "recv_buf_size": 2097152, 00:31:12.623 "send_buf_size": 2097152, 00:31:12.623 "enable_recv_pipe": true, 00:31:12.623 "enable_quickack": false, 00:31:12.623 "enable_placement_id": 0, 00:31:12.623 "enable_zerocopy_send_server": true, 00:31:12.623 "enable_zerocopy_send_client": false, 00:31:12.623 "zerocopy_threshold": 0, 00:31:12.623 "tls_version": 0, 00:31:12.623 "enable_ktls": false 00:31:12.623 } 00:31:12.623 } 00:31:12.623 ] 00:31:12.623 }, 00:31:12.623 { 00:31:12.623 "subsystem": "vmd", 00:31:12.623 "config": [] 00:31:12.623 }, 00:31:12.623 { 00:31:12.623 "subsystem": "accel", 00:31:12.623 "config": [ 00:31:12.623 { 00:31:12.623 "method": "accel_set_options", 00:31:12.623 "params": { 00:31:12.623 "small_cache_size": 128, 00:31:12.623 "large_cache_size": 16, 00:31:12.623 "task_count": 2048, 00:31:12.623 "sequence_count": 2048, 00:31:12.623 "buf_count": 2048 00:31:12.623 } 00:31:12.623 } 00:31:12.623 ] 00:31:12.623 
}, 00:31:12.623 { 00:31:12.623 "subsystem": "bdev", 00:31:12.623 "config": [ 00:31:12.623 { 00:31:12.623 "method": "bdev_set_options", 00:31:12.623 "params": { 00:31:12.623 "bdev_io_pool_size": 65535, 00:31:12.623 "bdev_io_cache_size": 256, 00:31:12.623 "bdev_auto_examine": true, 00:31:12.623 "iobuf_small_cache_size": 128, 00:31:12.623 "iobuf_large_cache_size": 16 00:31:12.623 } 00:31:12.623 }, 00:31:12.623 { 00:31:12.623 "method": "bdev_raid_set_options", 00:31:12.623 "params": { 00:31:12.623 "process_window_size_kb": 1024, 00:31:12.623 "process_max_bandwidth_mb_sec": 0 00:31:12.623 } 00:31:12.623 }, 00:31:12.623 { 00:31:12.623 "method": "bdev_iscsi_set_options", 00:31:12.623 "params": { 00:31:12.623 "timeout_sec": 30 00:31:12.623 } 00:31:12.623 }, 00:31:12.623 { 00:31:12.623 "method": "bdev_nvme_set_options", 00:31:12.623 "params": { 00:31:12.623 "action_on_timeout": "none", 00:31:12.623 "timeout_us": 0, 00:31:12.623 "timeout_admin_us": 0, 00:31:12.623 "keep_alive_timeout_ms": 10000, 00:31:12.623 "arbitration_burst": 0, 00:31:12.623 "low_priority_weight": 0, 00:31:12.623 "medium_priority_weight": 0, 00:31:12.623 "high_priority_weight": 0, 00:31:12.623 "nvme_adminq_poll_period_us": 10000, 00:31:12.623 "nvme_ioq_poll_period_us": 0, 00:31:12.623 "io_queue_requests": 512, 00:31:12.623 "delay_cmd_submit": true, 00:31:12.623 "transport_retry_count": 4, 00:31:12.623 "bdev_retry_count": 3, 00:31:12.623 "transport_ack_timeout": 0, 00:31:12.623 "ctrlr_loss_timeout_sec": 0, 00:31:12.623 "reconnect_delay_sec": 0, 00:31:12.623 "fast_io_fail_timeout_sec": 0, 00:31:12.623 "disable_auto_failback": false, 00:31:12.623 "generate_uuids": false, 00:31:12.623 "transport_tos": 0, 00:31:12.623 "nvme_error_stat": false, 00:31:12.623 "rdma_srq_size": 0, 00:31:12.623 "io_path_stat": false, 00:31:12.623 "allow_accel_sequence": false, 00:31:12.623 "rdma_max_cq_size": 0, 00:31:12.623 "rdma_cm_event_timeout_ms": 0, 00:31:12.623 "dhchap_digests": [ 00:31:12.623 "sha256", 00:31:12.623 "sha384", 00:31:12.623 "sha512" 00:31:12.623 ], 00:31:12.623 "dhchap_dhgroups": [ 00:31:12.623 "null", 00:31:12.623 "ffdhe2048", 00:31:12.623 "ffdhe3072", 00:31:12.623 "ffdhe4096", 00:31:12.623 "ffdhe6144", 00:31:12.623 "ffdhe8192" 00:31:12.623 ] 00:31:12.623 } 00:31:12.624 }, 00:31:12.624 { 00:31:12.624 "method": "bdev_nvme_attach_controller", 00:31:12.624 "params": { 00:31:12.624 "name": "nvme0", 00:31:12.624 "trtype": "TCP", 00:31:12.624 "adrfam": "IPv4", 00:31:12.624 "traddr": "127.0.0.1", 00:31:12.624 "trsvcid": "4420", 00:31:12.624 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:12.624 "prchk_reftag": false, 00:31:12.624 "prchk_guard": false, 00:31:12.624 "ctrlr_loss_timeout_sec": 0, 00:31:12.624 "reconnect_delay_sec": 0, 00:31:12.624 "fast_io_fail_timeout_sec": 0, 00:31:12.624 "psk": "key0", 00:31:12.624 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:12.624 "hdgst": false, 00:31:12.624 "ddgst": false 00:31:12.624 } 00:31:12.624 }, 00:31:12.624 { 00:31:12.624 "method": "bdev_nvme_set_hotplug", 00:31:12.624 "params": { 00:31:12.624 "period_us": 100000, 00:31:12.624 "enable": false 00:31:12.624 } 00:31:12.624 }, 00:31:12.624 { 00:31:12.624 "method": "bdev_wait_for_examine" 00:31:12.624 } 00:31:12.624 ] 00:31:12.624 }, 00:31:12.624 { 00:31:12.624 "subsystem": "nbd", 00:31:12.624 "config": [] 00:31:12.624 } 00:31:12.624 ] 00:31:12.624 }' 00:31:12.624 19:31:58 keyring_file -- keyring/file.sh@114 -- # killprocess 1730901 00:31:12.624 19:31:58 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1730901 ']' 00:31:12.624 19:31:58 
keyring_file -- common/autotest_common.sh@954 -- # kill -0 1730901 00:31:12.624 19:31:58 keyring_file -- common/autotest_common.sh@955 -- # uname 00:31:12.624 19:31:58 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:12.624 19:31:58 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1730901 00:31:12.882 19:31:58 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:12.882 19:31:58 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:12.882 19:31:58 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1730901' 00:31:12.882 killing process with pid 1730901 00:31:12.882 19:31:58 keyring_file -- common/autotest_common.sh@969 -- # kill 1730901 00:31:12.882 Received shutdown signal, test time was about 1.000000 seconds 00:31:12.882 00:31:12.882 Latency(us) 00:31:12.882 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:12.882 =================================================================================================================== 00:31:12.882 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:12.882 19:31:58 keyring_file -- common/autotest_common.sh@974 -- # wait 1730901 00:31:12.882 19:31:59 keyring_file -- keyring/file.sh@117 -- # bperfpid=1732450 00:31:12.882 19:31:59 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1732450 /var/tmp/bperf.sock 00:31:12.882 19:31:59 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1732450 ']' 00:31:12.882 19:31:59 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:12.882 19:31:59 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:12.882 19:31:59 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:31:12.882 19:31:59 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:12.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
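The /dev/fd/63 in the bdevperf command line above is the read end of what is presumably a process substitution: the script replays the configuration captured by save_config into the fresh bdevperf instance so it starts with the same keyring state. A minimal sketch of that relaunch pattern, with paths shortened and rpc.py standing in for SPDK's scripts/rpc.py:

config=$(rpc.py -s /var/tmp/bperf.sock save_config)    # capture before killing the old app
./build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config")     # relaunch with the saved config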
00:31:12.882 19:31:59 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:12.882 19:31:59 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:31:12.882 "subsystems": [ 00:31:12.882 { 00:31:12.882 "subsystem": "keyring", 00:31:12.882 "config": [ 00:31:12.882 { 00:31:12.882 "method": "keyring_file_add_key", 00:31:12.882 "params": { 00:31:12.882 "name": "key0", 00:31:12.882 "path": "/tmp/tmp.cnZrdojRft" 00:31:12.882 } 00:31:12.882 }, 00:31:12.882 { 00:31:12.883 "method": "keyring_file_add_key", 00:31:12.883 "params": { 00:31:12.883 "name": "key1", 00:31:12.883 "path": "/tmp/tmp.FQ6kw70PjG" 00:31:12.883 } 00:31:12.883 } 00:31:12.883 ] 00:31:12.883 }, 00:31:12.883 { 00:31:12.883 "subsystem": "iobuf", 00:31:12.883 "config": [ 00:31:12.883 { 00:31:12.883 "method": "iobuf_set_options", 00:31:12.883 "params": { 00:31:12.883 "small_pool_count": 8192, 00:31:12.883 "large_pool_count": 1024, 00:31:12.883 "small_bufsize": 8192, 00:31:12.883 "large_bufsize": 135168 00:31:12.883 } 00:31:12.883 } 00:31:12.883 ] 00:31:12.883 }, 00:31:12.883 { 00:31:12.883 "subsystem": "sock", 00:31:12.883 "config": [ 00:31:12.883 { 00:31:12.883 "method": "sock_set_default_impl", 00:31:12.883 "params": { 00:31:12.883 "impl_name": "posix" 00:31:12.883 } 00:31:12.883 }, 00:31:12.883 { 00:31:12.883 "method": "sock_impl_set_options", 00:31:12.883 "params": { 00:31:12.883 "impl_name": "ssl", 00:31:12.883 "recv_buf_size": 4096, 00:31:12.883 "send_buf_size": 4096, 00:31:12.883 "enable_recv_pipe": true, 00:31:12.883 "enable_quickack": false, 00:31:12.883 "enable_placement_id": 0, 00:31:12.883 "enable_zerocopy_send_server": true, 00:31:12.883 "enable_zerocopy_send_client": false, 00:31:12.883 "zerocopy_threshold": 0, 00:31:12.883 "tls_version": 0, 00:31:12.883 "enable_ktls": false 00:31:12.883 } 00:31:12.883 }, 00:31:12.883 { 00:31:12.883 "method": "sock_impl_set_options", 00:31:12.883 "params": { 00:31:12.883 "impl_name": "posix", 00:31:12.883 "recv_buf_size": 2097152, 00:31:12.883 "send_buf_size": 2097152, 00:31:12.883 "enable_recv_pipe": true, 00:31:12.883 "enable_quickack": false, 00:31:12.883 "enable_placement_id": 0, 00:31:12.883 "enable_zerocopy_send_server": true, 00:31:12.883 "enable_zerocopy_send_client": false, 00:31:12.883 "zerocopy_threshold": 0, 00:31:12.883 "tls_version": 0, 00:31:12.883 "enable_ktls": false 00:31:12.883 } 00:31:12.883 } 00:31:12.883 ] 00:31:12.883 }, 00:31:12.883 { 00:31:12.883 "subsystem": "vmd", 00:31:12.883 "config": [] 00:31:12.883 }, 00:31:12.883 { 00:31:12.883 "subsystem": "accel", 00:31:12.883 "config": [ 00:31:12.883 { 00:31:12.883 "method": "accel_set_options", 00:31:12.883 "params": { 00:31:12.883 "small_cache_size": 128, 00:31:12.883 "large_cache_size": 16, 00:31:12.883 "task_count": 2048, 00:31:12.883 "sequence_count": 2048, 00:31:12.883 "buf_count": 2048 00:31:12.883 } 00:31:12.883 } 00:31:12.883 ] 00:31:12.883 }, 00:31:12.883 { 00:31:12.883 "subsystem": "bdev", 00:31:12.883 "config": [ 00:31:12.883 { 00:31:12.883 "method": "bdev_set_options", 00:31:12.883 "params": { 00:31:12.883 "bdev_io_pool_size": 65535, 00:31:12.883 "bdev_io_cache_size": 256, 00:31:12.883 "bdev_auto_examine": true, 00:31:12.883 "iobuf_small_cache_size": 128, 00:31:12.883 "iobuf_large_cache_size": 16 00:31:12.883 } 00:31:12.883 }, 00:31:12.883 { 00:31:12.883 "method": "bdev_raid_set_options", 00:31:12.883 "params": { 00:31:12.883 "process_window_size_kb": 1024, 00:31:12.883 "process_max_bandwidth_mb_sec": 0 00:31:12.883 } 00:31:12.883 }, 00:31:12.883 { 00:31:12.883 "method": 
"bdev_iscsi_set_options", 00:31:12.883 "params": { 00:31:12.883 "timeout_sec": 30 00:31:12.883 } 00:31:12.883 }, 00:31:12.883 { 00:31:12.883 "method": "bdev_nvme_set_options", 00:31:12.883 "params": { 00:31:12.883 "action_on_timeout": "none", 00:31:12.883 "timeout_us": 0, 00:31:12.883 "timeout_admin_us": 0, 00:31:12.883 "keep_alive_timeout_ms": 10000, 00:31:12.883 "arbitration_burst": 0, 00:31:12.883 "low_priority_weight": 0, 00:31:12.883 "medium_priority_weight": 0, 00:31:12.883 "high_priority_weight": 0, 00:31:12.883 "nvme_adminq_poll_period_us": 10000, 00:31:12.883 "nvme_ioq_poll_period_us": 0, 00:31:12.883 "io_queue_requests": 512, 00:31:12.883 "delay_cmd_submit": true, 00:31:12.883 "transport_retry_count": 4, 00:31:12.883 "bdev_retry_count": 3, 00:31:12.883 "transport_ack_timeout": 0, 00:31:12.883 "ctrlr_loss_timeout_sec": 0, 00:31:12.883 "reconnect_delay_sec": 0, 00:31:12.883 "fast_io_fail_timeout_sec": 0, 00:31:12.883 "disable_auto_failback": false, 00:31:12.883 "generate_uuids": false, 00:31:12.883 "transport_tos": 0, 00:31:12.883 "nvme_error_stat": false, 00:31:12.883 "rdma_srq_size": 0, 00:31:12.883 "io_path_stat": false, 00:31:12.883 "allow_accel_sequence": false, 00:31:12.883 "rdma_max_cq_size": 0, 00:31:12.883 "rdma_cm_event_timeout_ms": 0, 00:31:12.883 "dhchap_digests": [ 00:31:12.883 "sha256", 00:31:12.883 "sha384", 00:31:12.883 "sha512" 00:31:12.883 ], 00:31:12.883 "dhchap_dhgroups": [ 00:31:12.883 "null", 00:31:12.883 "ffdhe2048", 00:31:12.883 "ffdhe3072", 00:31:12.883 "ffdhe4096", 00:31:12.883 "ffdhe6144", 00:31:12.883 "ffdhe8192" 00:31:12.883 ] 00:31:12.883 } 00:31:12.883 }, 00:31:12.883 { 00:31:12.883 "method": "bdev_nvme_attach_controller", 00:31:12.883 "params": { 00:31:12.883 "name": "nvme0", 00:31:12.883 "trtype": "TCP", 00:31:12.883 "adrfam": "IPv4", 00:31:12.883 "traddr": "127.0.0.1", 00:31:12.883 "trsvcid": "4420", 00:31:12.883 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:12.883 "prchk_reftag": false, 00:31:12.883 "prchk_guard": false, 00:31:12.883 "ctrlr_loss_timeout_sec": 0, 00:31:12.883 "reconnect_delay_sec": 0, 00:31:12.883 "fast_io_fail_timeout_sec": 0, 00:31:12.883 "psk": "key0", 00:31:12.883 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:12.883 "hdgst": false, 00:31:12.883 "ddgst": false 00:31:12.883 } 00:31:12.883 }, 00:31:12.883 { 00:31:12.883 "method": "bdev_nvme_set_hotplug", 00:31:12.883 "params": { 00:31:12.883 "period_us": 100000, 00:31:12.883 "enable": false 00:31:12.883 } 00:31:12.883 }, 00:31:12.883 { 00:31:12.883 "method": "bdev_wait_for_examine" 00:31:12.883 } 00:31:12.883 ] 00:31:12.883 }, 00:31:12.883 { 00:31:12.883 "subsystem": "nbd", 00:31:12.883 "config": [] 00:31:12.883 } 00:31:12.883 ] 00:31:12.883 }' 00:31:12.883 19:31:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:12.883 [2024-07-24 19:31:59.110072] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:31:12.883 [2024-07-24 19:31:59.110127] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1732450 ] 00:31:13.141 EAL: No free 2048 kB hugepages reported on node 1 00:31:13.141 [2024-07-24 19:31:59.179185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:13.141 [2024-07-24 19:31:59.253819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:13.399 [2024-07-24 19:31:59.411755] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:13.964 19:31:59 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:13.964 19:31:59 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:31:13.964 19:31:59 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:31:13.964 19:31:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:13.964 19:31:59 keyring_file -- keyring/file.sh@120 -- # jq length 00:31:13.964 19:32:00 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:31:13.964 19:32:00 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:31:13.964 19:32:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:13.964 19:32:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:13.964 19:32:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:13.964 19:32:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:13.964 19:32:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:14.222 19:32:00 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:31:14.223 19:32:00 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:31:14.223 19:32:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:14.223 19:32:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:14.223 19:32:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:14.223 19:32:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:14.223 19:32:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:14.481 19:32:00 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:31:14.481 19:32:00 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:31:14.481 19:32:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:31:14.481 19:32:00 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:31:14.481 19:32:00 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:31:14.481 19:32:00 keyring_file -- keyring/file.sh@1 -- # cleanup 00:31:14.481 19:32:00 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.cnZrdojRft /tmp/tmp.FQ6kw70PjG 00:31:14.481 19:32:00 keyring_file -- keyring/file.sh@20 -- # killprocess 1732450 00:31:14.481 19:32:00 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1732450 ']' 00:31:14.481 19:32:00 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1732450 00:31:14.481 19:32:00 keyring_file -- 
common/autotest_common.sh@955 -- # uname 00:31:14.481 19:32:00 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:14.481 19:32:00 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1732450 00:31:14.481 19:32:00 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:14.481 19:32:00 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:14.481 19:32:00 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1732450' 00:31:14.481 killing process with pid 1732450 00:31:14.481 19:32:00 keyring_file -- common/autotest_common.sh@969 -- # kill 1732450 00:31:14.481 Received shutdown signal, test time was about 1.000000 seconds 00:31:14.481 00:31:14.481 Latency(us) 00:31:14.481 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:14.481 =================================================================================================================== 00:31:14.481 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:14.481 19:32:00 keyring_file -- common/autotest_common.sh@974 -- # wait 1732450 00:31:14.739 19:32:00 keyring_file -- keyring/file.sh@21 -- # killprocess 1730843 00:31:14.739 19:32:00 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1730843 ']' 00:31:14.739 19:32:00 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1730843 00:31:14.739 19:32:00 keyring_file -- common/autotest_common.sh@955 -- # uname 00:31:14.739 19:32:00 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:14.739 19:32:00 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1730843 00:31:14.739 19:32:00 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:14.739 19:32:00 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:14.739 19:32:00 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1730843' 00:31:14.739 killing process with pid 1730843 00:31:14.739 19:32:00 keyring_file -- common/autotest_common.sh@969 -- # kill 1730843 00:31:14.739 [2024-07-24 19:32:00.943840] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:14.739 19:32:00 keyring_file -- common/autotest_common.sh@974 -- # wait 1730843 00:31:15.305 00:31:15.305 real 0m11.982s 00:31:15.305 user 0m27.558s 00:31:15.305 sys 0m3.467s 00:31:15.305 19:32:01 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:15.305 19:32:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:15.305 ************************************ 00:31:15.305 END TEST keyring_file 00:31:15.305 ************************************ 00:31:15.305 19:32:01 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:31:15.305 19:32:01 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:31:15.305 19:32:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:15.305 19:32:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:15.305 19:32:01 -- common/autotest_common.sh@10 -- # set +x 00:31:15.305 ************************************ 00:31:15.305 START TEST keyring_linux 00:31:15.305 ************************************ 00:31:15.305 19:32:01 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:31:15.305 * Looking for test 
storage... 00:31:15.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:31:15.305 19:32:01 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:31:15.305 19:32:01 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:15.305 19:32:01 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:31:15.305 19:32:01 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:15.305 19:32:01 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:15.305 19:32:01 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:15.305 19:32:01 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:15.305 19:32:01 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:15.305 19:32:01 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:15.305 19:32:01 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:15.305 19:32:01 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:15.305 19:32:01 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:15.305 19:32:01 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:15.305 19:32:01 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:31:15.305 19:32:01 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:31:15.305 19:32:01 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:15.305 19:32:01 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:15.305 19:32:01 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:15.305 19:32:01 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:15.305 19:32:01 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:15.305 19:32:01 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:15.305 19:32:01 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:15.305 19:32:01 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:15.305 19:32:01 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.305 19:32:01 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.305 19:32:01 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.305 19:32:01 keyring_linux -- paths/export.sh@5 -- # export PATH 00:31:15.305 19:32:01 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.305 19:32:01 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:31:15.305 19:32:01 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:15.305 19:32:01 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:15.305 19:32:01 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:15.305 19:32:01 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:15.305 19:32:01 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:15.305 19:32:01 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:15.305 19:32:01 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:15.305 19:32:01 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:15.305 19:32:01 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:15.305 19:32:01 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:15.305 19:32:01 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:15.305 19:32:01 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:31:15.305 19:32:01 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:31:15.305 19:32:01 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:31:15.305 19:32:01 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:31:15.305 19:32:01 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:31:15.305 19:32:01 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:31:15.305 19:32:01 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:15.305 19:32:01 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:31:15.305 19:32:01 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:31:15.305 19:32:01 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:15.305 19:32:01 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:15.305 19:32:01 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:31:15.305 19:32:01 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:15.305 19:32:01 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:15.306 19:32:01 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:31:15.306 19:32:01 keyring_linux -- nvmf/common.sh@705 -- # python - 00:31:15.306 19:32:01 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:31:15.306 19:32:01 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:31:15.306 /tmp/:spdk-test:key0 00:31:15.306 19:32:01 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:31:15.306 19:32:01 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:31:15.306 19:32:01 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:31:15.306 19:32:01 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:15.306 19:32:01 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:31:15.306 19:32:01 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:31:15.306 19:32:01 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:31:15.306 19:32:01 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:31:15.306 19:32:01 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:31:15.306 19:32:01 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:15.306 19:32:01 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:31:15.306 19:32:01 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:31:15.306 19:32:01 keyring_linux -- nvmf/common.sh@705 -- # python - 00:31:15.564 19:32:01 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:31:15.564 19:32:01 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:31:15.564 /tmp/:spdk-test:key1 00:31:15.564 19:32:01 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1732975 00:31:15.564 19:32:01 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:31:15.564 19:32:01 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1732975 00:31:15.564 19:32:01 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1732975 ']' 00:31:15.564 19:32:01 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:15.564 19:32:01 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:15.564 19:32:01 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:15.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:15.564 19:32:01 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:15.564 19:32:01 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:15.564 [2024-07-24 19:32:01.623481] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
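For reference, the format_interchange_psk / format_key calls above generate the NVMeTLSkey-1 strings through an inline "python -" snippet. A plausible standalone re-creation, under the assumption that the payload is the literal key string followed by its little-endian CRC32 (the exact digest-field encoding is an assumption here):

format_interchange_psk() {
  local key=$1 digest=${2:-0}
  python3 - "$key" "$digest" <<'PY'
import base64, struct, sys, zlib
key = sys.argv[1].encode()
crc = struct.pack("<I", zlib.crc32(key))   # 4-byte little-endian CRC32 trailer
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
PY
}
format_interchange_psk 00112233445566778899aabbccddeeff 0
# should reproduce the NVMeTLSkey-1:00:MDAx...JEiQ: string added to the keyring just below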
00:31:15.564 [2024-07-24 19:32:01.623537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1732975 ] 00:31:15.564 EAL: No free 2048 kB hugepages reported on node 1 00:31:15.564 [2024-07-24 19:32:01.691469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:15.564 [2024-07-24 19:32:01.764379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:16.497 19:32:02 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:16.497 19:32:02 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:31:16.497 19:32:02 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:31:16.497 19:32:02 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.497 19:32:02 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:16.497 [2024-07-24 19:32:02.418510] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:16.497 null0 00:31:16.497 [2024-07-24 19:32:02.450569] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:16.497 [2024-07-24 19:32:02.450917] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:16.497 19:32:02 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.497 19:32:02 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:31:16.497 1044597880 00:31:16.497 19:32:02 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:31:16.498 308051150 00:31:16.498 19:32:02 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1733122 00:31:16.498 19:32:02 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1733122 /var/tmp/bperf.sock 00:31:16.498 19:32:02 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:31:16.498 19:32:02 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1733122 ']' 00:31:16.498 19:32:02 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:16.498 19:32:02 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:16.498 19:32:02 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:16.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:16.498 19:32:02 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:16.498 19:32:02 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:16.498 [2024-07-24 19:32:02.525119] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
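Condensed, the keyring_linux happy path exercised over the next several lines is the following sequence (every command appears expanded in the log; rpc.py abbreviates the full scripts/rpc.py path):

keyctl add user :spdk-test:key0 "$psk" @s              # into the session keyring; prints the serial
rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
rpc.py -s /var/tmp/bperf.sock framework_start_init
rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
    -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 # PSK referenced by keyring key name
keyctl search @s user :spdk-test:key0                  # serial must match keyring_get_keys .sn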
00:31:16.498 [2024-07-24 19:32:02.525165] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1733122 ] 00:31:16.498 EAL: No free 2048 kB hugepages reported on node 1 00:31:16.498 [2024-07-24 19:32:02.593590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:16.498 [2024-07-24 19:32:02.668155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:17.430 19:32:03 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:17.430 19:32:03 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:31:17.430 19:32:03 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:31:17.430 19:32:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:31:17.430 19:32:03 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:31:17.430 19:32:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:17.689 19:32:03 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:31:17.689 19:32:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:31:17.689 [2024-07-24 19:32:03.904068] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:17.948 nvme0n1 00:31:17.948 19:32:04 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:31:17.948 19:32:04 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:31:17.948 19:32:04 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:31:17.948 19:32:04 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:31:17.948 19:32:04 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:31:17.948 19:32:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:17.948 19:32:04 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:31:17.948 19:32:04 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:31:17.948 19:32:04 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:31:17.948 19:32:04 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:31:17.948 19:32:04 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:18.207 19:32:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:18.207 19:32:04 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:31:18.207 19:32:04 keyring_linux -- keyring/linux.sh@25 -- # sn=1044597880 00:31:18.207 19:32:04 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:31:18.207 19:32:04 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:31:18.207 19:32:04 keyring_linux -- keyring/linux.sh@26 -- # [[ 1044597880 == \1\0\4\4\5\9\7\8\8\0 ]] 00:31:18.207 19:32:04 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 1044597880 00:31:18.207 19:32:04 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:31:18.207 19:32:04 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:18.467 Running I/O for 1 seconds... 00:31:19.403 00:31:19.403 Latency(us) 00:31:19.403 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:19.403 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:19.403 nvme0n1 : 1.01 14386.00 56.20 0.00 0.00 8860.86 2857.37 12215.91 00:31:19.403 =================================================================================================================== 00:31:19.403 Total : 14386.00 56.20 0.00 0.00 8860.86 2857.37 12215.91 00:31:19.403 0 00:31:19.403 19:32:05 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:19.403 19:32:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:19.661 19:32:05 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:31:19.661 19:32:05 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:31:19.661 19:32:05 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:31:19.661 19:32:05 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:31:19.661 19:32:05 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:31:19.661 19:32:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:19.661 19:32:05 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:31:19.661 19:32:05 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:31:19.661 19:32:05 keyring_linux -- keyring/linux.sh@23 -- # return 00:31:19.661 19:32:05 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:19.661 19:32:05 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:31:19.661 19:32:05 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:19.661 19:32:05 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:31:19.661 19:32:05 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:19.661 19:32:05 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:31:19.661 19:32:05 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:19.661 19:32:05 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:19.661 19:32:05 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:19.920 [2024-07-24 19:32:06.012632] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:19.920 [2024-07-24 19:32:06.012969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1e750 (107): Transport endpoint is not connected 00:31:19.920 [2024-07-24 19:32:06.013963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1e750 (9): Bad file descriptor 00:31:19.920 [2024-07-24 19:32:06.014964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:19.920 [2024-07-24 19:32:06.014977] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:31:19.920 [2024-07-24 19:32:06.014986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:19.920 request: 00:31:19.920 { 00:31:19.920 "name": "nvme0", 00:31:19.920 "trtype": "tcp", 00:31:19.920 "traddr": "127.0.0.1", 00:31:19.920 "adrfam": "ipv4", 00:31:19.920 "trsvcid": "4420", 00:31:19.920 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:19.920 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:19.920 "prchk_reftag": false, 00:31:19.920 "prchk_guard": false, 00:31:19.920 "hdgst": false, 00:31:19.920 "ddgst": false, 00:31:19.920 "psk": ":spdk-test:key1", 00:31:19.920 "method": "bdev_nvme_attach_controller", 00:31:19.920 "req_id": 1 00:31:19.920 } 00:31:19.920 Got JSON-RPC error response 00:31:19.920 response: 00:31:19.920 { 00:31:19.920 "code": -5, 00:31:19.920 "message": "Input/output error" 00:31:19.920 } 00:31:19.920 19:32:06 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:31:19.920 19:32:06 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:19.920 19:32:06 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:19.921 19:32:06 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:19.921 19:32:06 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:31:19.921 19:32:06 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:31:19.921 19:32:06 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:31:19.921 19:32:06 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:31:19.921 19:32:06 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:31:19.921 19:32:06 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:31:19.921 19:32:06 keyring_linux -- keyring/linux.sh@33 -- # sn=1044597880 00:31:19.921 19:32:06 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1044597880 00:31:19.921 1 links removed 00:31:19.921 19:32:06 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:31:19.921 19:32:06 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:31:19.921 19:32:06 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:31:19.921 19:32:06 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:31:19.921 19:32:06 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:31:19.921 19:32:06 keyring_linux -- keyring/linux.sh@33 -- # sn=308051150 00:31:19.921 
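A note on the failed attach above: it is the intended negative case. The target side appears to be provisioned for key0 only, so reconnecting with --psk :spdk-test:key1 has to fail, and the NOT wrapper from autotest_common.sh inverts the exit status, which is what the es=1 bookkeeping reflects. A rough sketch of that wrapper (the real helper also special-cases signal deaths via es > 128):

NOT() {
  local es=0
  "$@" || es=$?
  (( es != 0 ))   # succeed only when the wrapped command failed
}
NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1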
19:32:06 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 308051150 00:31:19.921 1 links removed 00:31:19.921 19:32:06 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1733122 00:31:19.921 19:32:06 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1733122 ']' 00:31:19.921 19:32:06 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1733122 00:31:19.921 19:32:06 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:31:19.921 19:32:06 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:19.921 19:32:06 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1733122 00:31:19.921 19:32:06 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:19.921 19:32:06 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:19.921 19:32:06 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1733122' 00:31:19.921 killing process with pid 1733122 00:31:19.921 19:32:06 keyring_linux -- common/autotest_common.sh@969 -- # kill 1733122 00:31:19.921 Received shutdown signal, test time was about 1.000000 seconds 00:31:19.921 00:31:19.921 Latency(us) 00:31:19.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:19.921 =================================================================================================================== 00:31:19.921 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:19.921 19:32:06 keyring_linux -- common/autotest_common.sh@974 -- # wait 1733122 00:31:20.180 19:32:06 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1732975 00:31:20.180 19:32:06 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1732975 ']' 00:31:20.180 19:32:06 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1732975 00:31:20.180 19:32:06 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:31:20.180 19:32:06 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:20.180 19:32:06 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1732975 00:31:20.180 19:32:06 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:20.180 19:32:06 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:20.180 19:32:06 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1732975' 00:31:20.180 killing process with pid 1732975 00:31:20.180 19:32:06 keyring_linux -- common/autotest_common.sh@969 -- # kill 1732975 00:31:20.180 19:32:06 keyring_linux -- common/autotest_common.sh@974 -- # wait 1732975 00:31:20.438 00:31:20.438 real 0m5.288s 00:31:20.438 user 0m9.088s 00:31:20.438 sys 0m1.770s 00:31:20.438 19:32:06 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:20.438 19:32:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:20.438 ************************************ 00:31:20.438 END TEST keyring_linux 00:31:20.438 ************************************ 00:31:20.438 19:32:06 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:31:20.438 19:32:06 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:31:20.438 19:32:06 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:31:20.438 19:32:06 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:31:20.438 19:32:06 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:31:20.438 19:32:06 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:31:20.438 19:32:06 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:31:20.438 19:32:06 -- 
spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:31:20.438 19:32:06 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:31:20.438 19:32:06 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:31:20.438 19:32:06 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:31:20.438 19:32:06 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:31:20.439 19:32:06 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:31:20.439 19:32:06 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:31:20.439 19:32:06 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:31:20.439 19:32:06 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:31:20.439 19:32:06 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:31:20.439 19:32:06 -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:20.439 19:32:06 -- common/autotest_common.sh@10 -- # set +x 00:31:20.439 19:32:06 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:31:20.439 19:32:06 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:31:20.439 19:32:06 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:31:20.439 19:32:06 -- common/autotest_common.sh@10 -- # set +x 00:31:27.004 INFO: APP EXITING 00:31:27.004 INFO: killing all VMs 00:31:27.004 INFO: killing vhost app 00:31:27.004 INFO: EXIT DONE 00:31:29.593 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:31:29.593 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:31:29.593 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:31:29.593 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:31:29.593 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:31:29.593 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:31:29.593 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:31:29.593 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:31:29.593 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:31:29.593 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:31:29.593 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:31:29.593 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:31:29.593 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:31:29.593 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:31:29.593 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:31:29.593 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:31:29.593 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:31:32.895 Cleaning 00:31:32.895 Removing: /var/run/dpdk/spdk0/config 00:31:32.895 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:32.895 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:32.895 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:32.895 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:32.895 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:31:32.895 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:31:32.895 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:31:32.895 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:31:32.895 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:32.895 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:32.895 Removing: /var/run/dpdk/spdk1/config 00:31:32.895 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:31:32.895 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:31:32.895 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:31:32.895 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:31:32.895 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:31:32.895 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:31:32.895 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:31:32.895 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:31:32.895 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:31:32.895 Removing: /var/run/dpdk/spdk1/hugepage_info 00:31:32.895 Removing: /var/run/dpdk/spdk1/mp_socket 00:31:32.895 Removing: /var/run/dpdk/spdk2/config 00:31:32.895 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:31:32.895 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:31:32.895 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:31:32.895 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:31:32.895 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:31:32.895 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:31:32.895 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:31:32.895 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:31:32.895 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:31:32.895 Removing: /var/run/dpdk/spdk2/hugepage_info 00:31:32.895 Removing: /var/run/dpdk/spdk3/config 00:31:32.895 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:31:32.895 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:31:32.895 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:31:32.895 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:31:32.895 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:31:32.895 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:31:32.895 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:31:32.895 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:31:32.895 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:31:32.895 Removing: /var/run/dpdk/spdk3/hugepage_info 00:31:32.895 Removing: /var/run/dpdk/spdk4/config 00:31:32.895 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:31:32.895 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:31:32.895 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:31:32.895 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:31:32.895 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:31:32.895 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:31:32.895 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:31:32.895 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:31:32.895 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:31:32.895 Removing: /var/run/dpdk/spdk4/hugepage_info 00:31:32.895 Removing: /dev/shm/bdev_svc_trace.1 00:31:32.895 Removing: /dev/shm/nvmf_trace.0 00:31:32.895 Removing: /dev/shm/spdk_tgt_trace.pid1334619 00:31:32.895 Removing: /var/run/dpdk/spdk0 00:31:32.895 Removing: /var/run/dpdk/spdk1 00:31:32.895 Removing: /var/run/dpdk/spdk2 00:31:32.895 Removing: /var/run/dpdk/spdk3 00:31:32.895 Removing: /var/run/dpdk/spdk4 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1332134 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1333408 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1334619 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1335313 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1336187 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1336420 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1337526 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1337651 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1337909 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1339615 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1341054 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1341367 
00:31:32.895 Removing: /var/run/dpdk/spdk_pid1341691 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1342019 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1342347 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1342629 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1342836 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1343091 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1344014 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1347506 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1347812 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1348128 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1348363 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1348925 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1349043 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1349502 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1349763 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1350065 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1350181 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1350362 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1350626 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1351004 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1351285 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1351602 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1355693 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1360209 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1370369 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1371116 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1375654 00:31:32.895 Removing: /var/run/dpdk/spdk_pid1375936 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1380455 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1386571 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1389372 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1401031 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1410507 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1412241 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1413175 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1431110 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1435141 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1481806 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1487418 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1494066 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1500307 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1500314 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1501106 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1502103 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1502945 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1503475 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1503489 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1503747 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1503983 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1504008 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1504818 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1505724 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1506662 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1507192 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1507194 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1507464 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1508750 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1509852 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1518364 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1543980 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1548771 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1550352 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1552231 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1552462 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1552734 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1553004 00:31:32.896 Removing: /var/run/dpdk/spdk_pid1553580 
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1555431
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1556548
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1557036
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1559257
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1559918
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1560652
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1564945
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1576164
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1580256
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1586490
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1587875
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1589463
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1594053
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1598422
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1606265
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1606294
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1611154
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1611324
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1611572
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1612091
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1612103
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1617417
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1618023
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1622680
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1625330
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1631117
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1636654
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1645525
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1652998
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1653000
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1672310
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1672926
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1673560
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1674263
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1675120
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1675896
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1676460
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1677124
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1681509
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1681785
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1688098
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1688339
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1690678
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1698660
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1698733
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1704250
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1706320
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1708746
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1709966
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1712062
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1713215
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1722255
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1722778
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1723304
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1725706
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1726158
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1726601
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1730843
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1730901
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1732450
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1732975
00:31:32.896 Removing: /var/run/dpdk/spdk_pid1733122
00:31:33.156 Clean
00:31:33.156 19:32:19 -- common/autotest_common.sh@1451 -- # return 0
00:31:33.156 19:32:19 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup
00:31:33.156 19:32:19 -- common/autotest_common.sh@730 -- # xtrace_disable
00:31:33.156 19:32:19 -- common/autotest_common.sh@10 -- # set +x
00:31:33.156 19:32:19 -- spdk/autotest.sh@390 -- # timing_exit autotest
00:31:33.156 19:32:19 -- common/autotest_common.sh@730 -- # xtrace_disable
00:31:33.156 19:32:19 -- common/autotest_common.sh@10 -- # set +x
00:31:33.156 19:32:19 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:31:33.156 19:32:19 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:31:33.156 19:32:19 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:31:33.156 19:32:19 -- spdk/autotest.sh@395 -- # hash lcov
00:31:33.156 19:32:19 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:31:33.156 19:32:19 -- spdk/autotest.sh@397 -- # hostname
00:31:33.156 19:32:19 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-22 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:31:33.416 geninfo: WARNING: invalid characters removed from testname!
00:31:55.365 19:32:39 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:56.303 19:32:42 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:57.683 19:32:43 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:59.589 19:32:45 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:32:01.497 19:32:47 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:32:02.876 19:32:48 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:32:04.825 19:32:50 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:32:04.825 19:32:50 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:32:04.825 19:32:50 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:32:04.825 19:32:50 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:04.825 19:32:50 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:04.825 19:32:50 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:04.825 19:32:50 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:04.825 19:32:50 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:04.825 19:32:50 -- paths/export.sh@5 -- $ export PATH
00:32:04.825 19:32:50 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:04.825 19:32:50 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:32:04.825 19:32:50 -- common/autobuild_common.sh@447 -- $ date +%s
00:32:04.825 19:32:50 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721842370.XXXXXX
00:32:04.825 19:32:50 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721842370.cVa5eN
00:32:04.825 19:32:50 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:32:04.825 19:32:50 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:32:04.825 19:32:50 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:32:04.825 19:32:50 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:32:04.825 19:32:50 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:32:04.825 19:32:50 -- common/autobuild_common.sh@463 -- $ get_config_params
00:32:04.825 19:32:50 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:32:04.825 19:32:50 -- common/autotest_common.sh@10 -- $ set +x
00:32:04.825 19:32:50 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:32:04.825 19:32:50 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:32:04.825 19:32:50 -- pm/common@17 -- $ local monitor
00:32:04.825 19:32:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:04.825 19:32:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:04.825 19:32:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:04.825 19:32:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:04.825 19:32:50 -- pm/common@25 -- $ sleep 1
00:32:04.825 19:32:50 -- pm/common@21 -- $ date +%s
00:32:04.825 19:32:50 -- pm/common@21 -- $ date +%s
00:32:04.825 19:32:50 -- pm/common@21 -- $ date +%s
00:32:04.825 19:32:50 -- pm/common@21 -- $ date +%s
00:32:04.825 19:32:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721842370
00:32:04.825 19:32:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721842370
00:32:04.825 19:32:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721842370
00:32:04.825 19:32:50 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721842370
00:32:04.825 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721842370_collect-cpu-temp.pm.log
00:32:04.825 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721842370_collect-vmstat.pm.log
00:32:04.825 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721842370_collect-cpu-load.pm.log
00:32:04.825 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721842370_collect-bmc-pm.bmc.pm.log
00:32:05.763 19:32:51 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:32:05.763 19:32:51 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112
00:32:05.763 19:32:51 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:32:05.763 19:32:51 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:32:05.763 19:32:51 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:32:05.763 19:32:51 -- spdk/autopackage.sh@19 -- $ timing_finish
00:32:05.763 19:32:51 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:32:05.763 19:32:51 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:32:05.763 19:32:51 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:32:05.763 19:32:51 -- spdk/autopackage.sh@20 -- $ exit 0
00:32:05.763 19:32:51 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:32:05.763 19:32:51 -- pm/common@29 -- $ signal_monitor_resources TERM
00:32:05.763 19:32:51 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:32:05.763 19:32:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:05.763 19:32:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:32:05.763 19:32:51 -- pm/common@44 -- $ pid=1743891
00:32:05.763 19:32:51 -- pm/common@50 -- $ kill -TERM 1743891
00:32:05.763 19:32:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:05.763 19:32:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:32:05.763 19:32:51 -- pm/common@44 -- $ pid=1743892
00:32:05.763 19:32:51 -- pm/common@50 -- $ kill -TERM 1743892
00:32:05.763 19:32:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:05.763 19:32:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:32:05.763 19:32:51 -- pm/common@44 -- $ pid=1743894
00:32:05.763 19:32:51 -- pm/common@50 -- $ kill -TERM 1743894
00:32:05.763 19:32:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:05.763 19:32:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:32:05.763 19:32:51 -- pm/common@44 -- $ pid=1743918
00:32:05.763 19:32:51 -- pm/common@50 -- $ sudo -E kill -TERM 1743918
00:32:05.763 + [[ -n 1222854 ]]
00:32:05.763 + sudo kill 1222854
00:32:05.772 [Pipeline] }
00:32:05.789 [Pipeline] // stage
00:32:05.794 [Pipeline] }
00:32:05.810 [Pipeline] // timeout
00:32:05.815 [Pipeline] }
00:32:05.832 [Pipeline] // catchError
00:32:05.837 [Pipeline] }
00:32:05.854 [Pipeline] // wrap
00:32:05.860 [Pipeline] }
00:32:05.875 [Pipeline] // catchError
00:32:05.885 [Pipeline] stage
00:32:05.887 [Pipeline] { (Epilogue)
00:32:05.902 [Pipeline] catchError
00:32:05.904 [Pipeline] {
00:32:05.918 [Pipeline] echo
00:32:05.919 Cleanup processes
00:32:05.925 [Pipeline] sh
00:32:06.211 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:32:06.211 1743999 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:32:06.211 1744339 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:32:06.225 [Pipeline] sh
00:32:06.509 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:32:06.509 ++ grep -v 'sudo pgrep'
00:32:06.509 ++ awk '{print $1}'
00:32:06.509 + sudo kill -9 1743999
00:32:06.521 [Pipeline] sh
00:32:06.804 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:32:06.804 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:32:10.995 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:32:15.202 [Pipeline] sh
00:32:15.485 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:32:15.485 Artifacts sizes are good
00:32:15.502 [Pipeline] archiveArtifacts
00:32:15.510 Archiving artifacts
00:32:15.681 [Pipeline] sh
00:32:15.964 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:32:15.976 [Pipeline] cleanWs
00:32:15.985 [WS-CLEANUP] Deleting project workspace...
00:32:15.985 [WS-CLEANUP] Deferred wipeout is used...
00:32:15.992 [WS-CLEANUP] done
00:32:15.994 [Pipeline] }
00:32:16.013 [Pipeline] // catchError
00:32:16.023 [Pipeline] sh
00:32:16.306 + logger -p user.info -t JENKINS-CI
00:32:16.315 [Pipeline] }
00:32:16.326 [Pipeline] // stage
00:32:16.329 [Pipeline] }
00:32:16.343 [Pipeline] // node
00:32:16.348 [Pipeline] End of Pipeline
00:32:16.371 Finished: SUCCESS